flightsmoke57 · 2 years
Green Cleaning for Health and a Healthy Environment
Limit the use of disinfectants to areas where people are likely to come into contact with contaminated surfaces (e.g., bathroom fixtures, doorknobs, and other high-touch surfaces). Many general-purpose cleaning tasks do not usually require disinfectants (e.g., walls, floors, and other surfaces with minimal hand contact). Other biodegradable oil soaps and dish soaps are available in some areas, most often in natural-food stores or from door-to-door salespeople. Wring the cloth to spread the mixture further into the material, and apply it to the furniture using wide strokes. To reduce static cling, dampen your hands and shake out your garments as you remove them from the dryer. Line-drying clothing and using dryer balls are other alternatives. Baking soda, or vinegar with lemon juice, set out in small dishes absorbs odors around the house. Unscented soap in liquid form is biodegradable and can clean just about anything. Castile soap is one example of a good, versatile cleaning ingredient. To make laundry detergent, mix 1 cup washing soda, ½ cup baking soda, ½ cup citric acid, ¼ cup salt, and 1 bar of finely grated glycerin soap. It is very difficult to find organic cleaning products for your laundry unless you make them yourself. Combine 2 tablespoons of olive oil, 1 tablespoon of white vinegar, and a quart of warm water in a spray bottle. Be sure to wipe it off well when using this cleaner on your wood floors, as olive oil leaves a slippery residue. This green cleaning recipe will leave you with sparkling clean windows and mirrors.

Slaughterhouse Cleaning and Sanitation

A disinfectant is an agent that destroys all vegetative bacteria, lipid viruses, some nonlipid viruses, and some fungi, but not bacterial spores. A sterilant is an agent capable of killing bacterial spores when used in sufficient concentration under appropriate conditions. Contamination is the state of having actual or potential contact with microorganisms; as used in health care, the term usually refers to the presence of microorganisms that could produce disease or infection. Some microorganisms are helpful, but others are harmful and cause disease. The degree of microbial control can be evaluated using a microbial death curve, which describes the progress and effectiveness of a particular protocol. When exposed to a given microbial control protocol, a fixed percentage of the microbes in the population will die. Because the rate of killing remains constant even when the population size varies, the percentage killed is more useful information than the absolute number of microbes killed. Death curves are often plotted as semilog plots, just like microbial growth curves, because the reduction in microorganisms is typically logarithmic. Ultrasonic leaching has been investigated for the decontamination of different types of soils from landfills, mining spills, and river sediments, as well as for various contaminants such as organic compounds. The application of ultrasound in air-pollution control is based on the phenomenon of acoustic agglomeration, which causes small particles to precipitate for easy removal. Acoustic agglomeration is a process in which high-intensity sound waves produce relative motion and collisions among fine particles suspended in gaseous media.

Top-Rated Pressure Washing in Jacksonville, FL

Power washing makes your home look move-in ready.
Happy Home boasts fully certified technicians, industrial-grade supplies, and the equipment needed to deliver perfect results time and time again. We take great pride in the reviews from our happy clients and would love nothing more than the chance to restore your happiness at home too. "They do an excellent job; our windows look great, inside and out. They are great to work with and very professional. I've used many window cleaners, and these guys are the best. They provided a written quote, showed up on time, and the crew did the work the next day. I asked them to replace some bulbs in my motion sensors. They cleaned all my gutters, and I have a pretty big house." All of our services can help you be happy with your house's curb appeal. You'll enjoy knowing that you're doing everything it takes to protect your Happy Valley residence and the rest of your property. Contact us today for a free pressure washing estimate. You'll be able to access our services during our weekend hours and can set up a consultation to go over the projects you have in mind. Alabama's conditions are ripe for algae growth thanks to spring and summer heat and occasional heavy thunderstorms. In most cases, algae shows up on siding as black streaks or dull green growth, and it can harm your house's siding over time.

Coronavirus Disease (COVID)

When you are done cleaning, simply press a button on the handle to dispose of the grimy sponge without getting your hands dirty. Clorox Bathroom Foamer with Bleach is a spray that is hard on germs, mold, mildew, and soap scum but easy on the wallet. Scrub Free Bathroom Cleaner with OxiClean makes dirt and soap scum easy to wipe away without adding too much elbow grease to the mix. But what if I told you there is a way to clean your toilet once and never clean it again? Stick with the 1 tsp unless you want to rinse off the solution. Alcohol turns into chloroform when mixed with bleach, and breathing in chloroform can cause fatigue, dizziness, and fainting. Before you start using bleach everywhere, it is important to know that bleach can burn your skin and gives off dangerous fumes. Like the many other menial maintenance and household tasks that come with homeownership, cleaning your bathroom often lands at the very bottom of your chore list. When used in the bathroom, baking soda acts as a deodorizer, lifting stains and smells trapped on bathroom surfaces (and everyone knows how sticky a bathroom can get!). Keep products such as strong acids and alkalis directed away from skin and eyes when in use. Wear protective clothing, including gloves, safety goggles, and an apron.

Cleaning Drapes to Prevent Dust Buildup

This can lead to repairs, replacement, and sometimes repainting walls if hardware scratches the paint. Avoid all these potential damages and extra costs by choosing Beautiful Windows Blinds. Dusty and dirty drapes and curtains are not only unsightly but can be a health concern for people with allergies. Dry-cleaning is the safest route for some drapes and curtains, such as wool, pleated items, or heavily structured swags. Once the steam cleaner is ready to go, begin with the back of the curtain and work from top to bottom, holding the steamer 2 to 3 inches from the fabric.
Once you complete the back side of the curtain, repeat the process on the front. After a 15-minute drying period, you can decide whether you need a second round of steaming or spot cleaning. Areas to concentrate on when decluttering include countertops, tables, bookshelves, and desks. You can also check your storage room and pantry to see if there is anything you could remove. If you have allergies or asthma, try washing your sheets more often to see if your symptoms improve. When cleaning your bedroom, wash your bedsheets and pillowcases at least once a week. If you have pets, you may need to wash your sheets more often, such as every three or four days.

The Best Power Washing Services in Scottsdale

Labor Panes will coordinate a logical and efficient workflow for all of your exterior cleaning services. We will communicate with the appropriate property managers at each essential step along the way. Professional window washers use professional-grade squeegees to clean windows. Features like a dual-sided blade, a comfortable handle grip, and an arched blade are all expected from any top-shelf squeegee. Conversely, cheap window cleaners will coax squeaky screeches out of your windows with their scratchy blades or leave rust around the screws from unwashed hands. It depends on a few things; one is how much sunlight your home receives. Professional window cleaners also have the tools to protect themselves, including proper gloves and safety glasses. Not cleaning windows in the correct order is another risk. Some people try to clean their windows first, but following that with pressure washing can leave residue behind; a thorough window cleaning should be done after pressure washing. A professional window cleaner will use proven, safer methods and make sure their cleaning equipment is working correctly. HOW TO KEEP CLEAN BETWEEN SERVICES: One of the best things you can do for your windows is to ditch ammonia- and alcohol-based window cleaners. Replace them with a white-vinegar-based cleaner such as Window Gang Blue®, or simply use a damp microfiber cloth followed by a dry microfiber cloth. We safely apply compounds that remove mold, mildew, and bacteria to clean and protect your home from decomposition. Our soft wash will improve the health of your home inside and out. The 0-degree tip is a pinpoint tip that delivers water with extreme force. The 15-degree tip is somewhat wider and therefore has less pressure; each wider tip spreads the force of the water more, creating less pressure from the spray. We can proudly say that our customers are so happy with us that they constantly refer us to their friends, family, neighbors, and colleagues. Every job is unique, so in order to give you an accurate estimate we will require your full address. We also need your contact information to send you your free estimate.

How to Clean Your Living Room

If your floor has marks or stains that still won't come off, you can use stronger stuff. Isopropyl alcohol, sold as a disinfectant at drugstores, is a gentle solvent.
It is the best cleaner for heel marks and works on other tough stains too. Remember that all of these products are flammable; turn off any nearby pilot lights and hang rags out to dry before throwing them away. Learn how to use it to remove tough stains from vinyl flooring. Simply remove the fixture heads and soak them in warm vinegar for a few minutes. For tough spots, you may need to brush away the debris with an old toothbrush. Rinse these pieces thoroughly before replacing them. If you want to avoid chemical stainless-steel cleaning products, you can mix a small amount of liquid dish soap into a large pot of hot water. Get a microfiber cloth slightly damp with the soapy blend and gently wipe down the stainless-steel surfaces. To avoid soap and water spots, rinse the area with clean water. Standing on a step stool or ladder, carefully wipe each fan blade with a ceiling fan duster or a clean, dry microfiber cloth. If it is safe to do so, you can run a vacuum with an extended hose and dust attachment gently across the blade tops. Stand on a step stool so you can see what you are doing, and clear away the gunk using warm, soapy water. Give the front door some extra love by cleaning it inside and out with warm, soapy water on a well-wrung-out soft sponge, and dry it with a soft cloth. Use scissors to carefully remove hair from the rotating brush at the bottom of your vacuum and replace the filters. Toss cleaning cloths and rags into the washing machine. Sanitize mops and sponges using a mixture of bleach and water.
lealeagarcia-blog · 5 years
HISTORY OF COMPUTERS
A computer might be described with deceptive simplicity as “an apparatus that performs routine calculations automatically.” Such a definition would owe its deceptiveness to a naive and narrow view of calculation as a strictly mathematical process. In fact, calculation underlies many activities that are not normally thought of as mathematical. Walking across a room, for instance, requires many complex, albeit subconscious, calculations. Computers, too, have proved capable of solving a vast array of problems, from balancing a checkbook to even—in the form of guidance systems for robots—walking across a room.
Before the true power of computing could be realized, therefore, the naive view of calculation had to be overcome. The inventors who laboured to bring the computer into the world had to learn that the thing they were inventing was not just a number cruncher, not merely a calculator. For example, they had to learn that it was not necessary to invent a new computer for every new calculation and that a computer could be designed to solve numerous problems, even problems not yet imagined when the computer was built. They also had to learn how to tell such a general problem-solving computer what problem to solve. In other words, they had to invent programming.
They had to solve all the heady problems of developing such a device, of implementing the design, of actually building the thing. The history of the solving of these problems is the history of the computer. That history is covered in this section, and links are provided to entries on many of the individuals and companies mentioned. In addition, see the articles computer science and supercomputer.
Early history
Computer precursors
The abacus
The earliest known calculating device is probably the abacus. It dates back at least to 1100 BCE and is still in use today, particularly in Asia. Now, as then, it typically consists of a rectangular frame with thin parallel rods strung with beads. Long before any systematic positional notation was adopted for the writing of numbers, the abacus assigned different units, or weights, to each rod. This scheme allowed a wide range of numbers to be represented by just a few beads and, together with the invention of zero in India, may have inspired the invention of the Hindu-Arabic number system. In any case, abacus beads can be readily manipulated to perform the common arithmetical operations—addition, subtraction, multiplication, and division—that are useful for commercial transactions and in bookkeeping.
The abacus is a digital device; that is, it represents values discretely. A bead is either in one predefined position or another, representing unambiguously, say, one or zero.
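The weighting scheme is easy to make concrete: the value an abacus shows is just a weighted sum of bead counts. Here, and in the sketches that follow, Python is our own illustrative choice (the original article contains no code), and the decimal rod assignment below is a simplification rather than any particular historical abacus.

```python
# Value of an abacus reading as a weighted sum: each rod holds a digit,
# and the rod's position supplies its weight (a decimal assignment here).
beads = [4, 0, 7, 2]                       # thousands, hundreds, tens, units
weights = [1000, 100, 10, 1]
value = sum(b * w for b, w in zip(beads, weights))
print(value)                               # 4072
```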
Analog calculators: from Napier’s logarithms to the slide rule
Calculating devices took a different turn when John Napier, a Scottish mathematician, published his discovery of logarithms in 1614. As any person can attest, adding two 10-digit numbers is much simpler than multiplying them together, and the transformation of a multiplication problem into an addition problem is exactly what logarithms enable. This simplification is possible because of the following logarithmic property: the logarithm of the product of two numbers is equal to the sum of the logarithms of the numbers. By 1624, tables with 14 significant digits were available for the logarithms of numbers from 1 to 20,000, and scientists quickly adopted the new labour-saving tool for tedious astronomical calculations.
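A short numeric sketch of the property the paragraph describes, in the spirit of what Napier's tables let a human computer do with two look-ups and one addition:

```python
import math

# Multiplication via logarithms: log10(a * b) = log10(a) + log10(b).
a, b = 2718.28, 31415.9

log_sum = math.log10(a) + math.log10(b)   # two "table look-ups", one addition
product = 10 ** log_sum                   # one inverse (antilog) look-up

print(product)   # agrees with the direct product up to rounding
print(a * b)
```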
Most significant for the development of computing, the transformation of multiplication into addition greatly simplified the possibility of mechanization. Analog calculating devices based on Napier’s logarithms—representing digital values with analogous physical lengths—soon appeared. In 1620 Edmund Gunter, the English mathematician who coined the terms cosine and cotangent, built a device for performing navigational calculations: the Gunter scale, or, as navigators simply called it, the gunter. About 1632 an English clergyman and mathematician named William Oughtred built the first slide rule, drawing on Napier’s ideas. That first slide rule was circular, but Oughtred also built the first rectangular one in 1633. The analog devices of Gunter and Oughtred had various advantages and disadvantages compared with digital devices such as the abacus. What is important is that the consequences of these design decisions were being tested in the real world.
Digital calculators: from the Calculating Clock to the Arithmometer
In 1623 the German astronomer and mathematician Wilhelm Schickard built the first calculator. He described it in a letter to his friend the astronomer Johannes Kepler, and in 1624 he wrote again to explain that a machine he had commissioned to be built for Kepler was, apparently along with the prototype, destroyed in a fire. He called it a Calculating Clock, which modern engineers have been able to reproduce from details in his letters. Even general knowledge of the clock had been temporarily lost when Schickard and his entire family perished during the Thirty Years’ War.
But Schickard may not have been the true inventor of the calculator. More than a century earlier, Leonardo da Vinci sketched plans for a calculator that were sufficiently complete and correct for modern engineers to build a calculator on their basis.
The first calculator or adding machine to be produced in any quantity and actually used was the Pascaline, or Arithmetic Machine, designed and built by the French mathematician-philosopher Blaise Pascal between 1642 and 1644. It could only do addition and subtraction, with numbers being entered by manipulating its dials. Pascal invented the machine for his father, a tax collector, so it was the first business machine too (if one does not count the abacus). He built 50 of them over the next 10 years.
In 1671 the German mathematician-philosopher Gottfried Wilhelm von Leibniz designed a calculating machine called the Step Reckoner. (It was first built in 1673.) The Step Reckoner expanded on Pascal’s ideas and did multiplication by repeated addition and shifting.
Leibniz was a strong advocate of the binary number system. Binary numbers are ideal for machines because they require only two digits, which can easily be represented by the on and off states of a switch. When computers became electronic, the binary system was particularly appropriate because an electrical circuit is either on or off. This meant that on could represent true, off could represent false, and the flow of current would directly represent the flow of logic.
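The correspondence between digits, switches, and truth values can be shown in a few lines; the switch metaphor in code is our own illustration:

```python
# A number as a row of on/off switches: its binary representation.
n = 13
bits = format(n, "04b")                  # '1101'
switches = [bit == "1" for bit in bits]
print(switches)                          # [True, True, False, True]

# The same two states double as logic values, so the flow of current
# can stand in for the flow of logic:
on, off = True, False
print(on and off, on or off, not off)    # False True True
```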
Leibniz was prescient in seeing the appropriateness of the binary system in calculating machines, but his machine did not use it. Instead, the Step Reckoner represented numbers in decimal form, as positions on 10-position dials. Even decimal representation was not a given: in 1668 Samuel Morland invented an adding machine specialized for British money—a decidedly nondecimal system.
Pascal’s, Leibniz’s, and Morland’s devices were curiosities, but with the Industrial Revolution of the 18th century came a widespread need to perform repetitive operations efficiently. With other activities being mechanized, why not calculation? In 1820 Charles Xavier Thomas de Colmar of France effectively met this challenge when he built his Arithmometer, the first commercial mass-produced calculating device. It could perform addition, subtraction, multiplication, and, with some more elaborate user involvement, division. Based on Leibniz’s technology, it was extremely popular and sold for 90 years. In contrast to the modern calculator’s credit-card size, the Arithmometer was large enough to cover a desktop.
The Jacquard loom
Calculators such as the Arithmometer remained a fascination after 1820, and their potential for commercial use was well understood. Many other mechanical devices built during the 19th century also performed repetitive functions more or less automatically, but few had any application to computing. There was one major exception: the Jacquard loom, invented in 1804–05 by a French weaver, Joseph-Marie Jacquard.
The Jacquard loom was a marvel of the Industrial Revolution. A textile-weaving loom, it could also be called the first practical information-processing device. The loom worked by tugging various-coloured threads into patterns by means of an array of rods. By inserting a card punched with holes, an operator could control the motion of the rods and thereby alter the pattern of the weave. Moreover, the loom was equipped with a card-reading device that slipped a new card from a prepunched deck into place every time the shuttle was thrown, so that complex weaving patterns could be automated.
What was extraordinary about the device was that it transferred the design process from a labour-intensive weaving stage to a card-punching stage. Once the cards had been punched and assembled, the design was complete, and the loom implemented the design automatically. The Jacquard loom, therefore, could be said to be programmed for different patterns by these decks of punched cards.
For those intent on mechanizing calculations, the Jacquard loom provided important lessons: the sequence of operations that a machine performs could be controlled to make the machine do something quite different; a punched card could be used as a medium for directing the machine; and, most important, a device could be directed to perform different tasks by feeding it instructions in a sort of language—i.e., making the machine programmable.
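As a toy model of that last lesson, here is a hypothetical "loom" of our own devising: the machine itself never changes, and its output is determined entirely by the deck of cards fed to it.

```python
# Each card is a row of holes: 1 lifts a thread, 0 does not.
# The "loom" is fixed; the deck is the program.
def weave(deck):
    for card in deck:
        print("".join("#" if hole else "." for hole in card))

diamond = [
    (0, 0, 1, 0, 0),
    (0, 1, 0, 1, 0),
    (1, 0, 0, 0, 1),
    (0, 1, 0, 1, 0),
    (0, 0, 1, 0, 0),
]
weave(diamond)   # swap in a different deck and the same loom weaves a new pattern
```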
It is not too great a stretch to say that, in the Jacquard loom, programming was invented before the computer. The close relationship between the device and the program became apparent some 20 years later, with Charles Babbage’s invention of the first computer.
The first computer
By the second decade of the 19th century, a number of ideas necessary for the invention of the computer were in the air. First, the potential benefits to science and industry of being able to automate routine calculations were appreciated, as they had not been a century earlier. Specific methods to make automated calculation more practical, such as doing multiplication by adding logarithms or by repeating addition, had been invented, and experience with both analog and digital devices had shown some of the benefits of each approach. The Jacquard loom (as described in the previous section, Computer precursors) had shown the benefits of directing a multipurpose device through coded instructions, and it had demonstrated how punched cards could be used to modify those instructions quickly and flexibly. It was a mathematical genius in England who began to put all these pieces together.
The Difference Engine
Charles Babbage was an English mathematician and inventor: he invented the cowcatcher, reformed the British postal system, and was a pioneer in the fields of operations research and actuarial science. It was Babbage who first suggested that the weather of years past could be read from tree rings. He also had a lifelong fascination with keys, ciphers, and mechanical dolls.
As a founding member of the Royal Astronomical Society, Babbage had seen a clear need to design and build a mechanical device that could automate long, tedious astronomical calculations. He began by writing a letter in 1822 to Sir Humphry Davy, president of the Royal Society, about the possibility of automating the construction of mathematical tables—specifically, logarithm tables for use in navigation. He then wrote a paper, “On the Theoretical Principles of the Machinery for Calculating Tables,” which he read to the society later that year. (It won the Royal Society’s first Gold Medal in 1823.) Tables then in use often contained errors, which could be a life-and-death matter for sailors at sea, and Babbage argued that, by automating the production of the tables, he could assure their accuracy. Having gained support in the society for his Difference Engine, as he called it, Babbage next turned to the British government to fund development, obtaining one of the world’s first government grants for research and technological development.
Babbage approached the project very seriously: he hired a master machinist, set up a fireproof workshop, and built a dustproof environment for testing the device. Up until then calculations were rarely carried out to more than 6 digits; Babbage planned to produce 20- or 30-digit results routinely. The Difference Engine was a digital device: it operated on discrete digits rather than smooth quantities, and the digits were decimal (0–9), represented by positions on toothed wheels, rather than the binary digits that Leibniz favoured (but did not use). When one of the toothed wheels turned from 9 to 0, it caused the next wheel to advance one position, carrying the digit just as Leibniz’s Step Reckoner calculator had operated.
The Difference Engine was more than a simple calculator, however. It mechanized not just a single calculation but a whole series of calculations on a number of variables to solve a complex problem. It went far beyond calculators in other ways as well. Like modern computers, the Difference Engine had storage—that is, a place where data could be held temporarily for later processing—and it was designed to stamp its output into soft metal, which could later be used to produce a printing plate.
Nevertheless, the Difference Engine performed only one operation. The operator would set up all of its data registers with the original data, and then the single operation would be repeatedly applied to all of the registers, ultimately producing a solution. Still, in complexity and audacity of design, it dwarfed any calculating device then in existence.
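That single repeated operation was addition, applied by the method of finite differences. Below is a sketch of the method, using Euler's prime-generating quadratic as our example function (the engine's intended tables varied); note that after setup, every new table entry costs only two additions:

```python
# Tabulating f(x) = x^2 + x + 41 using nothing but addition after the
# first differences are set up, which is all a difference engine does.
f = lambda x: x * x + x + 41

values = [f(0)]                          # 41
d1 = f(1) - f(0)                         # first difference: 2
d2 = (f(2) - f(1)) - (f(1) - f(0))       # second difference, constant: 2

for _ in range(6):
    values.append(values[-1] + d1)       # one addition per new table entry
    d1 += d2                             # one addition to update the difference

print(values)                            # [41, 43, 47, 53, 61, 71, 83]
```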
The full engine, designed to be room-size, was never built, at least not by Babbage. Although he sporadically received several government grants—governments changed, funding often ran out, and he had to personally bear some of the financial costs—he was working at or near the tolerances of the construction methods of the day, and he ran into numerous construction difficulties. All design and construction ceased in 1833, when Joseph Clement, the machinist responsible for actually building the machine, refused to continue unless he was prepaid. (The completed portion of the Difference Engine is on permanent exhibition at the Science Museum in London.)
The Analytical Engine
While working on the Difference Engine, Babbage began to imagine ways to improve it. Chiefly he thought about generalizing its operation so that it could perform other kinds of calculations. By the time the funding had run out in 1833, he had conceived of something far more revolutionary: a general-purpose computing machine called the Analytical Engine.
The Analytical Engine was to be a general-purpose, fully program-controlled, automatic mechanical digital computer. It would be able to perform any calculation set before it. Before Babbage there is no evidence that anyone had ever conceived of such a device, let alone attempted to build one. The machine was designed to consist of four components: the mill, the store, the reader, and the printer. These components are the essential components of every computer today. The mill was the calculating unit, analogous to the central processing unit (CPU) in a modern computer; the store was where data were held prior to processing, exactly analogous to memory and storage in today’s computers; and the reader and printer were the input and output devices.
As with the Difference Engine, the project was far more complex than anything theretofore built. The store was to be large enough to hold 1,000 50-digit numbers; this was larger than the storage capacity of any computer built before 1960. The machine was to be steam-driven and run by one attendant. The printing capability was also ambitious, as it had been for the Difference Engine: Babbage wanted to automate the process as much as possible, right up to producing printed tables of numbers.
The reader was another new feature of the Analytical Engine. Data (numbers) were to be entered on punched cards, using the card-reading technology of the Jacquard loom. Instructions were also to be entered on cards, another idea taken directly from Jacquard. The use of instruction cards would make it a programmable device and far more flexible than any machine then in existence. Another element of programmability was to be its ability to execute instructions in other than sequential order. It was to have a kind of decision-making ability in its conditional control transfer, also known as conditional branching, whereby it would be able to jump to a different instruction depending on the value of some data. This extremely powerful feature was missing in many of the early computers of the 20th century.
By most definitions, the Analytical Engine was a real computer as understood today—or would have been, had not Babbage run into implementation problems again. Actually building his ambitious design was judged infeasible given the current technology, and Babbage’s failure to generate the promised mathematical tables with his Difference Engine had dampened enthusiasm for further government funding. Indeed, it was apparent to the British government that Babbage was more interested in innovation than in constructing tables.
All the same, Babbage’s Analytical Engine was something new under the sun. Its most revolutionary feature was the ability to change its operation by changing the instructions on punched cards. Until this breakthrough, all the mechanical aids to calculation were merely calculators or, like the Difference Engine, glorified calculators. The Analytical Engine, although not actually completed, was the first machine that deserved to be called a computer.
Lady Lovelace, the first programmer
The distinction between calculator and computer, although clear to Babbage, was not apparent to most people in the early 19th century, even to the intellectually adventuresome visitors at Babbage’s soirees—with the exception of a young girl of unusual parentage and education.
Augusta Ada King, the countess of Lovelace, was the daughter of the poet Lord Byron and the mathematically inclined Anne Milbanke. One of her tutors was Augustus De Morgan, a famous mathematician and logician. Because Byron was involved in a notorious scandal at the time of her birth, Ada’s mother encouraged her mathematical and scientific interests, hoping to suppress any inclination to wildness she may have inherited from her father.
Toward that end, Lady Lovelace attended Babbage’s soirees and became fascinated with his Difference Engine. She also corresponded with him, asking pointed questions. It was his plan for the Analytical Engine that truly fired her imagination, however. In 1843, at age 27, she had come to understand it well enough to publish the definitive paper explaining the device and drawing the crucial distinction between this new thing and existing calculators. The Analytical Engine, she argued, went beyond the bounds of arithmetic. Because it operated on general symbols rather than on numbers, it established “a link…between the operations of matter and the abstract mental processes of the most abstract branch of mathematical science.” It was a physical device that was capable of operating in the realm of abstract thought.
Lady Lovelace rightly reported that this was not only something no one had built, it was something that no one before had even conceived. She went on to become the world’s only expert on the process of sequencing instructions on the punched cards that the Analytical Engine used; that is, she became the world’s first computer programmer.
One feature of the Analytical Engine was its ability to place numbers and instructions temporarily in its store and return them to its mill for processing at an appropriate time. This was accomplished by the proper sequencing of instructions and data in its reader, and the ability to reorder instructions and data gave the machine a flexibility and power that was hard to grasp. The first electronic digital computers of a century later lacked this ability. It was remarkable that a young scholar realized its importance in 1840, and it would be 100 years before anyone would understand it so well again. In the intervening century, attention would be diverted to the calculator and other business machines.
Early business machines
Throughout the 19th century, business machines were coming into common use. Calculators became available as a tool of commerce in 1820 (see the earlier section Digital calculators), and in 1874 the Remington Arms Company, Inc., sold the first commercially viable typewriter. Other machines were invented for other specific business tasks. None of these machines was a computer, but they did advance the state of practical mechanical knowledge—knowledge that would be used in computers later.
One of these machines was invented in response to a sort of constitutional crisis in the United States: the census tabulator.
Herman Hollerith’s census tabulator
The U.S. Constitution mandates that a census of the population be performed every 10 years. The first attempt at any mechanization of the census was in 1870, when statistical data were transcribed onto a rolling paper tape displayed through a small slotted window. As the size of America’s population exploded in the 19th century and the number of census questions expanded, the urgency of further mechanization became increasingly clear.
After graduating from the Columbia University School of Mines, New York City, in 1879, Herman Hollerith obtained his first job with one of his former professors, William P. Trowbridge, who had received a commission as a special agent for the 1880 census. It was while employed at the Census Office that Hollerith first saw the pressing need for automating the tabulation of statistical data.
Over the next 10 years Hollerith refined his ideas, obtaining his first patent in 1884 for a machine to punch and count cards. He then organized the health records for Baltimore, Maryland, for New York City, and for the state of New Jersey—all in preparation for winning the contract to tabulate the 1890 U.S. Census. The success of the U.S. census opened European governments to Hollerith’s machines. Most notably, a contract with the Russian government, signed on December 15, 1896, may have induced him to incorporate as the Tabulating Machine Company on December 5, 1896.
Other early business machine companies
Improvements in calculators continued: by the 1880s they could add in the accumulation of partial results, store past results, and print. Then, in 1892, William Seward Burroughs, who along with two other St. Louis, Missouri, businessmen had started the American Arithmometer Company in 1886 in order to build adding machines, obtained a patent for one of the first truly practical and commercially successful calculators. Burroughs died in 1898, and his company was reorganized as the Burroughs Adding Machine Company in Detroit, Michigan, in 1905.
All the calculators—and virtually all the information-processing devices—sold at this time were designed for commercial purposes, not scientific research. By the turn of the century, commercial calculating devices were in common use, as were other special-purpose machines such as one that generated serial numbers for banknotes. As a result, many of the business machine companies in the United States were doing well, including Hollerith’s Tabulating Machine Company.
In 1911 several of these companies combined to form the Computing-Tabulating-Recording Company, or CTR. In 1914 Thomas J. Watson, Sr., left his sales manager position at the National Cash Register Company to become president of CTR, and 10 years later CTR changed its name to International Business Machines Corporation, or IBM. In the second half of the century, IBM would become the giant of the world computer industry, but such commercial gains did not take place until enormous progress had been made in the theoretical understanding of the modern computer during the remarkable decades of the 1930s and ’40s. (This progress is described in the next section, Invention of the modern computer.)
Invention of the modern computer
Early experiments
As the technology for realizing a computer was being honed by the business machine companies in the early 20th century, the theoretical foundations were being laid in academia. During the 1930s two important strains of computer-related research were being pursued in the United States at two universities in Cambridge, Massachusetts. One strain produced the Differential Analyzer, the other a series of devices ending with the Harvard Mark IV.
Vannevar Bush’s Differential Analyzer
In 1930 an engineer named Vannevar Bush at the Massachusetts Institute of Technology (MIT) developed the first modern analog computer. The Differential Analyzer, as he called it, was an analog calculator that could be used to solve certain classes of differential equations, a type of problem common in physics and engineering applications that is often very tedious to solve. Variables were represented by shaft motion, and addition and multiplication were accomplished by feeding the values into a set of gears. Integration was carried out by means of a knife-edged wheel rotating at a variable radius on a circular table. The individual mechanical integrators were then interconnected to solve a set of differential equations.
The Differential Analyzer proved highly useful, and a number of them were built and used at various universities. Still the device was limited to solving this one class of problem, and, as is the case for all analog devices, it produced approximate, albeit practical, solutions. Nevertheless, important applications for analog computers and analog-digital hybrid computers still exist, particularly for simulating complicated dynamical systems such as aircraft flight, nuclear power plant operations, and chemical reactions.
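A digital stand-in suggests what the interconnected integrators computed. The sketch below substitutes Euler's step-by-step method (our choice, not Bush's mechanism) for the continuously rotating wheel-and-disk integrator:

```python
import math

# Numerically integrating dy/dt = -y with y(0) = 1; exact answer is e**-t.
# The analyzer accumulated dy continuously; here we accumulate small steps.
y, t, dt = 1.0, 0.0, 0.001
while t < 1.0:
    y += -y * dt          # the "integrator": add up dy = f(t, y) * dt
    t += dt

print(y, math.exp(-1.0))  # ~0.3677 vs 0.36788: approximate, like the analog machine
```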
Howard Aiken’s digital calculators
While Bush was working on analog computing at MIT, across town Harvard professor Howard Aiken was working with digital devices for calculation. He had begun to realize in hardware something like Babbage’s Analytical Engine, which he had read about. Starting in 1937, he laid out detailed plans for a series of four calculating machines of increasing sophistication, based on different technologies, from the largely mechanical Mark I to the electronic Mark IV.
Aiken was methodically exploring the technological advances made since the mechanical assembly and steam power available to Babbage. Electromagnetic relay circuits were already being used in business machines, and the vacuum tube—a switch with no moving parts, very high speed action, and greater reliability than electromechanical relays—was quickly put to use in the early experimental machines.
The business machines of the time used plugboards (something like telephone switchboards) to route data manually, and Aiken chose not to use them for the specification of instructions. This turned out to make his machine much easier to program than the more famous ENIAC, designed somewhat later, which had to be manually rewired for each program.
From 1939 to 1944 Aiken, in collaboration with IBM, developed his first fully functional computer, known as the Harvard Mark I. The machine, like Babbage’s, was huge: more than 50 feet (15 metres) long, weighing five tons, and consisting of about 750,000 separate parts, it was mostly mechanical. For input and output it used three paper-tape readers, two card readers, a card punch, and two typewriters. It took between three and six seconds to add two numbers. Aiken developed three more such machines (Mark II–IV) over the next few years and is credited with developing the first fully automatic large-scale calculator.
The Turing machine
Alan Turing, while a mathematics student at the University of Cambridge, was inspired by German mathematician David Hilbert’s formalist program, which sought to demonstrate that any mathematical problem can potentially be solved by an algorithm—that is, by a purely mechanical process. Turing interpreted this to mean a computing machine and set out to design one capable of resolving all mathematical problems, but in the process he proved in his seminal paper “On Computable Numbers, with an Application to the Entscheidungsproblem [‘Decision Problem’]” (1936) that no such universal mathematical solver could ever exist.
In order to design his machine (known to posterity as the “Turing machine”), he needed to find an unambiguous definition of the essence of a computer. In doing so, Turing worked out in great detail the basic concepts of a universal computing machine—that is, a computing machine that could, at least in theory, do anything that a special-purpose computing device could do. In particular, it would not be limited to doing arithmetic. The internal states of the machine could represent numbers, but they could equally well represent logic values or letters. In fact, Turing believed that everything could be represented symbolically, even abstract mental states, and he was one of the first advocates of the artificial-intelligence position that computers can potentially “think.”
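A minimal simulator makes the abstraction tangible. The toy machine below is our own illustration, not one from Turing's paper; its small rule table adds 1 to a binary number.

```python
# Rules map (state, symbol) -> (symbol to write, head move, next state).
rules = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus carry becomes 0; carry moves left
    ("carry", "0"): ("1",  0, "halt"),   # the carry is absorbed
    ("carry", " "): ("1",  0, "halt"),   # the number grows one digit longer
}

tape = list(" 1011")                     # a blank cell, then 11 in binary
head, state = len(tape) - 1, "carry"     # start at the rightmost digit

while state != "halt":
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape).strip())             # 1100, i.e., 12
```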
Turing’s work up to this point was entirely abstract, entirely a theoretical demonstration. Nevertheless, he made it clear from the start that his results implied the possibility of building a machine of the sort he described. His work characterized the abstract essence of any computing device so well that it was in effect a challenge to actually build one.
Turing’s work had an immediate effect on only a small number of academics at a few universities who were interested in the concept of computing machinery. It had no immediate effect on the growing industry of business machines, all of which were special-purpose devices. But to the few who were interested, Turing’s work was an inspiration to pursue something of which most of the world had not even conceived: a universal computing machine.
Pioneering work
The Atanasoff-Berry Computer
It was generally believed that the first electronic digital computers were the Colossus, built in England in 1943, and the ENIAC, built in the United States in 1945. However, the first special-purpose electronic computer may actually have been invented by John Vincent Atanasoff, a physicist and mathematician at Iowa State College (now Iowa State University), during 1937–42. (Atanasoff also claimed to have invented the term analog computer to describe machines such as Vannevar Bush’s Differential Analyzer.) Together with his graduate assistant Clifford E. Berry, Atanasoff built a successful small prototype in 1939 for the purpose of testing two ideas central to his design: capacitors to store data in binary form and electronic logic circuits to perform addition and subtraction. They then began the design and construction of a larger, more general-purpose computer, known as the Atanasoff-Berry Computer, or ABC.
Various components of the ABC were designed and built from 1939 to 1942, but development was discontinued with the onset of World War II. The ABC featured about 300 vacuum tubes for control and arithmetic calculations, use of binary numbers, logic operations (instead of direct counting), memory capacitors, and punched cards as input/output units. (At Atanasoff’s invitation, another early computer pioneer, John Mauchly, stayed at his home and was freely shown his work for several days in June 1941. For more on the ramifications of this visit, see BTW: Computer patent wars.)
The first computer network
Between 1940 and 1946 George Stibitz and his team at Bell Laboratories built a series of machines with telephone technologies—i.e., employing electromechanical relays. These were the first machines to serve more than one user and the first to work remotely over telephone lines. However, because they were based on slow mechanical relays rather than electronic switches, they became obsolete almost as soon as they were constructed.
Konrad Zuse
Meanwhile, in Germany, engineer Konrad Zuse had been thinking about calculating machines. He was advised by a calculator manufacturer in 1937 that the field was a dead end and that every computing problem had already been solved. Zuse had something else in mind, though.
For one thing, Zuse worked in binary from the beginning. All of his prototype machines, built in 1936, used binary representation in order to simplify construction. This had the added advantage of making the connection with logic clearer, and Zuse worked out the details of how the operations of logic (e.g., AND, OR, and NOT) could be mapped onto the design of the computer’s circuits. (English mathematician George Boole had shown the connection between logic and mathematics in the mid-19th century, developing an algebra of logic now known as Boolean algebra.) Zuse also spent more time than his predecessors and contemporaries developing software for his computer, the language in which it was to be programmed. (His contributions to programming are examined in the section Programming languages.) Although all his early prewar machines were really calculators—not computers—his Z3, completed in December 1941 (and destroyed on April 6, 1945, during an Allied air raid on Berlin), was the first program-controlled processor.
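Zuse's mapping of logic onto circuitry can be sketched as a half adder: the sum bit is an exclusive-or composed purely from AND, OR, and NOT (our own illustration, not Zuse's notation):

```python
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

def half_adder(a, b):
    """Add two bits using only the three basic logic operations."""
    total = AND(OR(a, b), NOT(AND(a, b)))   # exclusive-or built from AND/OR/NOT
    carry = AND(a, b)
    return total, carry

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(int(a), "+", int(b), "->", int(c), int(s))   # 1 + 1 -> 1 0 (binary 10)
```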
Because all Zuse’s work was done in relative isolation, he knew little about work on computers in the United States and England, and, when the war began, the isolation became complete.
The following section, Developments during World War II, examines the development during the 1940s of the first fully functional digital computers.
Developments during World War II
Colossus
The exigencies of war gave impetus and funding to computer research. For example, in Britain the impetus was code breaking. The Ultra project was funded with much secrecy to develop the technology necessary to crack ciphers and codes produced by the German electromechanical devices known as the Enigma and the Geheimschreiber (“Secret Writer”). The first in a series of important code-breaking machines, Colossus, also known as the Mark I, was built under the direction of Thomas Flowers and delivered in December 1943 to the code-breaking operation at Bletchley Park, a government research centre north of London. It employed approximately 1,800 vacuum tubes for computations. Successively larger and more elaborate versions were built over the next two years.
The Ultra project had a gifted mathematician associated with the Bletchley Park effort, and one familiar with codes. Alan Turing, who had earlier articulated the concept of a universal computing device (described in the section The Turing machine), may have pushed the project farther in the direction of a general-purpose device than his government originally had in mind. Turing’s advocacy helped keep up government support for the project.
Although it lacked some characteristics now associated with computers, Colossus can plausibly be described as the first electronic digital computer, and it was certainly a key stepping stone to the development of the modern computer. Although Colossus was designed to perform specific cryptographic-related calculations, it could be used for more-generalized purposes. Its design pioneered the massive use of electronics in computation, and it embodied an insight from Flowers of the importance of storing data electronically within the machine. The operation at Bletchley foreshadowed the modern data centre.
Colossus was successful in its intended purpose: the German messages it helped to decode provided information about German battle orders, supplies, and personnel; it also confirmed that an Allied deception campaign, Operation Fortitude, was working.
The series of Colossus computers were disassembled after the war, and most information about them remained classified until the 1990s. In 1996 the basic Colossus machine was rebuilt and switched on at Bletchley Park.
The Z4
In Germany, Konrad Zuse began construction of the Z4 in 1943 with funding from the Air Ministry. Like his Z3 (described in the section Konrad Zuse), the Z4 used electromechanical relays, in part because of the difficulty in acquiring the roughly 2,000 necessary vacuum tubes in wartime Germany. The Z4 was evacuated from Berlin in early 1945, and it eventually wound up in Hinterstein, a small village in the Bavarian Alps, where it remained until Zuse brought it to the Federal Technical Institute in Zürich, Switzerland, for refurbishing in 1950. Although unable to continue with hardware development, Zuse made a number of advances in software design.
Zuse’s use of floating-point representation for numbers—the significant digits, known as the mantissa, are stored separately from a pointer to the decimal point, known as the exponent, allowing a very large range of numbers to be handled—was far ahead of its time. In addition, Zuse developed a rich set of instructions, handled infinite values correctly, and included a “no-op”—that is, an instruction that did nothing. Only significant experience in programming would show the need for something so apparently useless.
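That mantissa-exponent split survives in every modern float, though hardware now scales by powers of 2 rather than 10. Python's math.frexp exposes the decomposition:

```python
import math

x = 6.25e8
mantissa, exponent = math.frexp(x)      # x == mantissa * 2**exponent
print(mantissa, exponent)               # ~0.582, 30
print(mantissa * 2 ** exponent == x)    # True

# The same few stored digits cover a huge range of magnitudes:
print(math.frexp(6.25e-8))              # ~0.524, -23
```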
The Z4’s program was punched on used movie film and was separate from the mechanical memory for data (in other words, there was no stored program). The machine was relatively reliable (it normally ran all night unattended), but it had no decision-making ability. Addition took 0.5 to 1.25 seconds, multiplication 3.5 seconds.
ENIAC
In the United States, government funding went to a project led by John Mauchly, J. Presper Eckert, Jr., and their colleagues at the Moore School of Electrical Engineering at the University of Pennsylvania; their objective was an all-electronic computer. Under contract to the army and under the direction of Herman Goldstine, work began in early 1943 on the Electronic Numerical Integrator and Computer (ENIAC). The next year, mathematician John von Neumann, already on full-time leave from the Institute for Advanced Study (IAS), Princeton, New Jersey, for various government research projects (including the Manhattan Project), began frequent consultations with the group.
ENIAC was something less than the dream of a universal computer. Designed for the specific purpose of computing values for artillery range tables, it lacked some features that would have made it a more generally useful machine. Like Colossus but unlike Howard Aiken’s machine (described in the section Early experiments), it used plugboards for communicating instructions to the machine; this had the advantage that, once the instructions were thus “programmed,” the machine ran at electronic speed. Instructions read from a card reader or other slow mechanical device would not have been able to keep up with the all-electronic ENIAC. The disadvantage was that it took days to rewire the machine for each new problem. This was such a liability that only with some generosity could it be called programmable.
Nevertheless, ENIAC was the most powerful calculating device built to date. Like Charles Babbage’s Analytical Engine and the Colossus, but unlike Aiken’s Mark I, Konrad Zuse’s Z4, and George Stibitz’s telephone-savvy machine, it did have conditional branching—that is, it had the ability to execute different instructions or to alter the order of execution of instructions based on the value of some data. (For instance, IF X > 5 THEN GO TO LINE 23.) This gave ENIAC a lot of flexibility and meant that, while it was built for a specific purpose, it could be used for a wider range of problems.
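The article's own example, IF X > 5 THEN GO TO LINE 23, can be played out with a program counter that a branch may redirect; the tiny instruction set below is hypothetical, our own design:

```python
x = 0
program = {
    10: ("add", 2),             # x = x + 2
    20: ("if_gt_goto", 5, 40),  # if x > 5, jump to line 40
    30: ("goto", 10),           # otherwise loop back
    40: ("print",),
}

pc = 10
while pc in program:
    op = program[pc]
    if op[0] == "add":
        x += op[1]; pc += 10
    elif op[0] == "if_gt_goto":
        pc = op[2] if x > op[1] else pc + 10
    elif op[0] == "goto":
        pc = op[1]
    elif op[0] == "print":
        print(x); pc += 10      # prints 6, the first value that exceeds 5
```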
ENIAC was enormous. It occupied the 50-by-30-foot (15-by-9-metre) basement of the Moore School, where its 40 panels were arranged, U-shaped, along three walls. Each of the units was about 2 feet wide by 2 feet deep by 8 feet high (0.6 by 0.6 by 2.4 metres). With approximately 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays, it was easily the most complex electronic system theretofore built. ENIAC ran continuously (in part to extend tube life), generating 150 kilowatts of heat, and could execute up to 5,000 additions per second, several orders of magnitude faster than its electromechanical predecessors. Colossus, ENIAC, and subsequent computers employing vacuum tubes are known as first-generation computers. (With 1,500 mechanical relays, ENIAC was still transitional to later, fully electronic computers.)
Completed by February 1946, ENIAC had cost the government $400,000, and the war it was designed to help win was over. Its first task was doing calculations for the construction of a hydrogen bomb. A portion of the machine is on exhibit at the Smithsonian Institution in Washington, D.C.
Toward the classical computer
Bigger brains
The computers built during the war were built under unusual constraints. The British work was largely focused on code breaking, the American work on computing projectile trajectories and calculations for the atomic bomb. The computers were built as special-purpose devices, although they often embodied more general-purpose computing capabilities than their specifications called for. The vacuum tubes in these machines were not entirely reliable, but with no moving parts they were more reliable than the electromechanical switches they replaced, and they were much faster. Reliability was an issue, since Colossus used some 1,500 tubes and ENIAC on the order of 18,000. But ENIAC was, by virtue of its electronic realization, 1,000 times faster than the Harvard Mark I. Such speed meant that the machine could perform calculations that were theretofore beyond human ability. Although tubes were a great advance over the electromechanical realization of Aiken or the steam-and-mechanical model of Babbage, the basic architecture of the machines (that is, the functions they were able to perform) was not much advanced beyond Babbage’s Difference Engine and Analytical Engine. In fact, the original name for ENIAC was Electronic Difference Analyzer, and it was built to perform much like Babbage’s Difference Engine.
After the war, efforts focused on fulfilling the idea of a general-purpose computing device. In 1945, before ENIAC was even finished, planning began at the Moore School for ENIAC’s successor, the Electronic Discrete Variable Automatic Computer, or EDVAC. (Planning for EDVAC also set the stage for an ensuing patent fight; see BTW: Computer patent wars.) ENIAC was hampered, as all previous electronic computers had been, by the need to use one vacuum tube to store each bit, or binary digit. The feasible number of vacuum tubes in a computer also posed a practical limit on storage capacity—beyond a certain point, vacuum tubes are bound to burn out as fast as they can be changed. For EDVAC, Eckert had a new idea for storage.
In 1880 French physicists Pierre and Jacques Curie had discovered that applying an electric current to a quartz crystal would produce a characteristic vibration and vice versa. During the 1930s at Bell Laboratories, William Shockley, later coinventor of the transistor, had demonstrated a device—a tube, called a delay line, containing water and ethylene glycol—for effecting a predictable delay in information transmission. Eckert had already built and experimented in 1943 with such a delay line (using mercury) in conjunction with radar research, and sometime in 1944 he hit upon the new idea of placing a quartz crystal at each end of the mercury delay line in order to sustain and modify the resulting pattern. In effect, he invented a new storage device. Whereas ENIAC required one tube per bit, EDVAC could use a delay line and 10 vacuum tubes to store 1,000 bits. Before the invention of the magnetic core memory and the transistor, which would eliminate the need for vacuum tubes altogether, the mercury delay line was instrumental in increasing computer storage and reliability.
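Functionally, a delay line is recirculating storage: bits are read as they emerge from the tube and immediately re-injected at the other end. A minimal sketch of that behavior:

```python
from collections import deque

# Eight bits "in flight" through the tube; storage is signal in transit.
line = deque([1, 0, 1, 1, 0, 0, 1, 0])

def tick(line):
    bit = line.popleft()   # a bit arrives at the far end of the tube...
    line.append(bit)       # ...and is reshaped and fed back in
    return bit

# Reading a word means waiting for its bits to circulate past:
print([tick(line) for _ in range(8)])   # [1, 0, 1, 1, 0, 0, 1, 0]
```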
Von Neumann’s “Preliminary Discussion”
But the design of the modern, or classical, computer did not fully crystallize until the publication of a 1946 paper by Arthur Burks, Herman Goldstine, and John von Neumann titled “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument”. Although the paper was essentially a synthesis of ideas currently “in the air,” it is frequently cited as the birth certificate of computer science.
Among the principles enunciated in the paper were that data and instructions should be kept in a single store and that instructions should be encoded so as to be modifiable by other instructions. This was an extremely critical decision, because it meant that one program could be treated as data by another program. Zuse had considered and rejected this possibility as too dangerous. But its inclusion by von Neumann’s group made possible high-level programming languages and most of the advances in software of the following 50 years. Subsequently, computers with stored programs would be known as von Neumann machines.
One problem that the stored-program idea solved was the need for rapid access to instructions. Colossus and ENIAC had used plugboards, which had the advantage of enabling the instructions to be read in electronically, rather than by much slower mechanical card readers, but also the disadvantage of making these first-generation machines very hard to program. But if the instructions could be stored in the same electronic memory that held the data, they could be accessed as quickly as needed. One immediately obvious consequence was that EDVAC would need a lot more memory than ENIAC.
The first stored-program machines
Government secrecy hampered British efforts to build on wartime computer advances, but engineers in Britain still beat the Americans to the goal of building the first stored-program digital computer. At the University of Manchester, Frederic C. Williams and Tom Kilburn built a simple stored-program computer, known as the Baby, in 1948. This was built to test their invention of a way to store information on a cathode-ray tube that enabled direct access (in contrast to the mercury delay line’s sequential access) to stored information. Although faster than Eckert’s storage method, it proved somewhat unreliable. Nevertheless, it became the preferred storage method for most of the early computers worldwide that were not already committed to mercury delay lines.
By 1949 Williams and Kilburn had extended the Baby to a full-size computer, the Manchester Mark I. This had two major new features that were to become computer standards: a two-level store and instruction modification registers (which soon evolved into index registers). A magnetic drum was added to provide a random-access secondary storage device. Until machines were fitted with index registers, every instruction that referred to an address that varied as the program ran—e.g., an array element—had to be preceded by instructions to alter its address to the current required value. Four months after the Baby first worked, the British government contracted the electronics firm of Ferranti to build a production computer based on the prospective Mark I. This became the Ferranti Mark I—the first commercial computer—of which nine were sold.
Kilburn, Williams, and colleagues at Manchester also came up with a breakthrough that would revolutionize how a computer executed instructions: they made it possible for the address portion of an instruction to be modified while the program was running. Before this, an instruction specified that a particular action—say, addition—was to be performed on data in one or more particular locations. Their innovation allowed the location to be modified as part of the operation of executing the instruction. This made it very easy to address elements within an array sequentially.
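The array-addressing trick is concrete enough to sketch. In the toy stored-program machine below, summing three numbers works by rewriting the address field of the machine's own LOAD instruction on each pass through the loop; because instructions and data live in one memory, the program can treat itself as data. The instruction set is invented for illustration and is not the Manchester Mark I's:

# Toy stored-program machine: code and data share one memory, so the
# INCR instruction below patches the LOAD instruction's address field.
memory = [
    ("LOAD", 100),   # 0: acc = memory[100]   <- this address gets rewritten
    ("ADD", None),   # 1: total = total + acc
    ("INCR", 0),     # 2: bump the address field of the instruction at 0
    ("JLT", 0),      # 3: jump back to 0 while that address is < 103
    ("HALT", None),  # 4
] + [None] * 95 + [10, 20, 30]   # the data: addresses 100, 101, 102

pc = total = acc = 0
while True:
    op, arg = memory[pc]
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        total += acc
    elif op == "INCR":               # self-modification happens here
        name, addr = memory[arg]
        memory[arg] = (name, addr + 1)
    elif op == "JLT":
        if memory[arg][1] < 103:
            pc = arg
            continue
    elif op == "HALT":
        break
    pc += 1

print(total)  # 60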
At the University of Cambridge, meanwhile, Maurice Wilkes and others built what is recognized as the first full-size, fully electronic, stored-program computer to provide a formal computing service for users. The Electronic Delay Storage Automatic Calculator (EDSAC) was built on the set of principles synthesized by von Neumann and, like the Manchester Mark I, became operational in 1949. Wilkes built the machine chiefly to study programming issues, which he realized would become as important as the hardware details.
Whirlwind
New hardware continued to be invented, though. In the United States, Jay Forrester of the Massachusetts Institute of Technology (MIT) and Jan Aleksander Rajchman of the Radio Corporation of America came up with a new kind of memory based on magnetic cores that was fast enough to enable MIT to build the first real-time computer, Whirlwind. A real-time computer is one that can respond seemingly instantly to basic instructions, thus allowing an operator to interact with a “running” computer.
UNIVAC
After leaving the Moore School, Eckert and Mauchly struggled to obtain capital to build their latest design, a computer they called the Universal Automatic Computer, or UNIVAC. (In the meantime, they contracted with the Northrop Corporation to build the Binary Automatic Computer, or BINAC, which, when completed in 1949, became the first American stored-program computer.) The partners delivered the first UNIVAC to the U.S. Bureau of the Census in March 1951, although their company, their patents, and their talents had been acquired by Remington Rand, Inc., in 1950. Although it owed something to experience with ENIAC, UNIVAC was built from the start as a stored-program computer, so it was really different architecturally. It used an operator keyboard and console typewriter for input and magnetic tape for all other input and output. Printed output was recorded on tape and then printed by a separate tape printer.
The UNIVAC I was designed as a commercial data-processing computer, intended to replace the punched-card accounting machines of the day. It could read 7,200 decimal digits per second (it did not use binary numbers), making it by far the fastest business machine yet built. Its use of Eckert’s mercury delay lines greatly reduced the number of vacuum tubes needed (to 5,000), thus enabling the main processor to occupy a “mere” 14.5 by 7.5 by 9 feet (approximately 4.4 by 2.3 by 2.7 metres) of space. It was a true business machine, signaling the convergence of academic computational research with the office automation trend of the late 19th and early 20th centuries. As such, it ushered in the era of “Big Iron”—or large, mass-produced computing equipment.
The age of Big Iron
A snapshot of computer development in the early 1950s would have to show a number of companies and laboratories in competition—technological competition and increasingly earnest business competition—to produce the few computers then demanded for scientific research. Several computer-building projects had been launched immediately after the end of World War II in 1945, primarily in the United States and Britain. These projects were inspired chiefly by a 1946 document, “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument,” produced by a group working under the direction of mathematician John von Neumann of the Institute for Advanced Study in Princeton, New Jersey. The IAS paper, as von Neumann’s document became known, articulated the concept of the stored program—a concept that has been called the single largest innovation in the history of the computer. (Von Neumann’s principles are described earlier, in the section Toward the classical computer.) Most computers built in the years following the paper’s distribution were designed according to its plan, yet by 1950 there were still only a handful of working stored-program computers.
Business use at this time was marginal because the machines were so hard to use. Although computer makers such as Remington Rand, the Burroughs Adding Machine Company, and IBM had begun building machines to the IAS specifications, it was not until 1954 that a real market for business computers began to emerge. The IBM 650, delivered at the end of 1954 for colleges and businesses, was a decimal implementation of the IAS design. With this low-cost magnetic drum computer, which sold for about $200,000 apiece (compared with about $1,000,000 for the scientific model, the IBM 701), IBM had a hit, eventually selling about 1,800 of them. In addition, by offering universities that taught computer science courses around the IBM 650 an academic discount program (with price reductions of up to 60 percent), IBM established a cadre of engineers and programmers for their machines. (Apple later used a similar discount strategy in American grade schools to capture a large proportion of the early microcomputer market.)
A snapshot of the era would also have to show what could be called the sociology of computing. The actual use of computers was restricted to a small group of trained experts, and there was resistance to the idea that this group should be expanded by making the machines easier to use. Machine time was expensive, more expensive than the time of the mathematicians and scientists who needed to use the machines, and computers could process only one problem at a time. As a result, the machines were in a sense held in higher regard than the scientists. If a task could be done by a person, it was thought that the machine’s time should not be wasted with it. The public’s perception of computers was not positive either. If motion pictures of the time can be used as a guide, the popular image was of a room-filling brain attended by white-coated technicians, mysterious and somewhat frightening—about to eliminate jobs through automation.
Yet the machines of the early 1950s were not much more capable than Charles Babbage’s Analytical Engine of the 1830s (although they were much faster). Although in principle these were general-purpose computers, they were still largely restricted to doing tough math problems. They often lacked the means to perform logical operations, and they had little text-handling capability—for example, lowercase letters were not even representable in the machines, even if there were devices capable of printing them.
These machines could be operated only by experts, and preparing a problem for computation (what would be called programming today) took a long time. With only one person at a time able to use a machine, major bottlenecks were created. Problems lined up like experiments waiting for a cyclotron or the space shuttle. Much of the machine’s precious time was wasted because of this one-at-a-time protocol.
In sum, the machines were expensive and the market was still small. To be useful in a broader business market or even in a broader scientific market, computers would need application programs: word processors, database programs, and so on. These applications in turn would require programming languages in which to write them and operating systems to manage them.
Programming languages
Early computer language development
Machine language
One implication of the stored-program model was that programs could read and operate on other programs as data; that is, they would be capable of self-modification. Konrad Zuse had looked upon this possibility as “making a contract with the Devil” because of the potential for abuse, and he had chosen not to implement it in his machines. But self-modification was essential for achieving a true general-purpose machine.
One of the very first uses of self-modification was computer language translation, “language” here referring to the instructions that make the machine work. Although the earliest machines worked by flipping switches, the stored-program machines were driven by stored coded instructions, and the conventions for encoding these instructions were referred to as the machine’s language.
Writing programs for early computers meant using the machine’s language. The form of a particular machine’s language is dictated by its physical and logical structure. For example, if the machine uses registers to store intermediate results of calculations, there must be instructions for moving data between such registers.
The vocabulary and rules of syntax of machine language tend to be highly detailed and very far from the natural or mathematical language in which problems are normally formulated. The desirability of automating the translation of problems into machine language was immediately evident to users, who either had to become computer experts and programmers themselves in order to use the machines or had to rely on experts and programmers who might not fully understand the problems they were translating.
Automatic translation from pure mathematics or some other “high-level language” to machine language was therefore necessary before computers would be useful to a broader class of users. As early as the 1830s, Charles Babbage and Lady Lovelace had recognized that such translation could be done by machine (see the earlier section Lady Lovelace, the first programmer), but they made no attempt to follow up on this idea and simply wrote their programs in machine language.
Howard Aiken, working in the 1930s, also saw the virtue of automated translation from a high-level language to machine language. Aiken proposed a coding machine that would be dedicated to this task, accepting high-level programs and producing the actual machine-language instructions that the computer would process.
But a separate machine was not actually necessary. The IAS model guaranteed that the stored-program computer would have the power to serve as its own coding machine. The translator program, written in machine language and running on the computer, would be fed the target program as data, and it would output machine-language instructions. This plan was altogether feasible, but the cost of the machines was so great that it was not seen as cost-effective to use them for anything that a human could do—including program translation.
Two forces, in fact, argued against the early development of high-level computer languages. One was skepticism that anyone outside the “priesthood” of computer operators could or would use computers directly. Consequently, early computer makers saw no need to make them more accessible to people who would not use them anyway. A second reason was efficiency. Any translation process would necessarily add to the computing time necessary to solve a problem, and mathematicians and operators were far cheaper by the hour than computers.
Programmers did, though, come up with specialized high-level languages, or HLLs, for computer instruction—even without automatic translators to turn their programs into machine language. They simply did the translation by hand. They did this because casting problems in an intermediate programming language, somewhere between mathematics and the highly detailed language of the machine, had the advantage of making it easier to understand the program’s logical structure and to correct, or debug, any defects in the program.
The early HLLs thus were all paper-and-pencil methods of recasting problems in an intermediate form that made it easier to write code for a machine. Herman Goldstine, with contributions from his wife, Adele Goldstine, and from John von Neumann, created a graphical representation of this process: flow diagrams. Although the diagrams were only a notational device, they were widely circulated and had great influence, evolving into what are known today as flowcharts.
Zuse’s Plankalkül
Konrad Zuse developed the first real programming language, Plankalkül (“Plan Calculus”), in 1944–45. Zuse’s language allowed for the creation of procedures (also called routines or subroutines; stored chunks of code that could be invoked repeatedly to perform routine operations such as taking a square root) and structured data (such as a record in a database, with a mixture of alphabetic and numeric data representing, for instance, name, address, and birth date). In addition, it provided conditional statements that could modify program execution, as well as repeat, or loop, statements that would cause a marked block of statements or a subroutine to be repeated a specified number of times or for as long as some condition held.
Zuse knew that computers could do more than arithmetic, but he was aware of the propensity of anyone introduced to them to view them as nothing more than calculators. So he took pains to demonstrate nonnumeric solutions with Plankalkül. He wrote programs to check the syntactical correctness of Boolean expressions (an application in logic and text handling) and even to check chess moves.
Unlike flowcharts, Plankalkül was not an intermediate language intended for pencil-and-paper translation by mathematicians. It was deliberately intended for machine translation, and Zuse did some work toward implementing a translator for Plankalkül. He did not get very far, however; he had to disassemble his machine near the end of the war and was not able to put it back together and work on it for several years. Unfortunately, his language and his work, which were roughly a dozen years ahead of their time, were not generally known outside Germany.
Interpreters
HLL coding was attempted right from the start of the stored-program era in the late 1940s. Shortcode, or short-order code, was the first such language actually implemented. Suggested by John Mauchly in 1949, it was implemented by William Schmitt for the BINAC computer in that year and for UNIVAC in 1950. Shortcode went through multiple steps: first it converted the alphabetic statements of the language to numeric codes, and then it translated these numeric codes into machine language. It was an interpreter, meaning that it translated HLL statements and executed, or performed, them one at a time—a slow process. Because of their slow execution, interpreters are now rarely used outside of program development, where they may help a programmer to locate errors quickly.
Compilers
An alternative to this approach is what is now known as compilation. In compilation, the entire HLL program is converted to machine language and stored for later execution. Although translation may take many hours or even days, once the translated program is stored, it can be recalled anytime in the form of a fast-executing machine-language program.
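The contrast fits in a few lines. Below, statements in a hypothetical three-operation language are first interpreted (decoded and dispatched one at a time, on every run) and then compiled, translated once into Python source standing in for machine language, which can afterward be executed repeatedly at full speed. This illustrates the distinction only; it is not Shortcode's or Glennie's actual scheme:

# Toy language: each statement is (opcode, target, operand).
program = [
    ("set", "x", 5),
    ("add", "x", 3),
    ("mul", "x", 2),
]

def interpret(prog):
    # Decode and dispatch every statement each time it is executed (slow).
    env = {}
    for op, var, val in prog:
        if op == "set":
            env[var] = val
        elif op == "add":
            env[var] += val
        elif op == "mul":
            env[var] *= val
    return env

def compile_to_python(prog):
    # Translate the whole program once; the decoding cost is paid up front.
    ops = {"set": "=", "add": "+=", "mul": "*="}
    return "\n".join(f"{var} {ops[op]} {val}" for op, var, val in prog)

print(interpret(program))         # {'x': 16}
src = compile_to_python(program)  # "x = 5\nx += 3\nx *= 2"
env = {}
exec(src, env)                    # run the translated program directly
print(env["x"])                   # 16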
In 1952 Heinz Rutishauser, who had worked with Zuse on his computers after the war, wrote an influential paper, “Automatische Rechenplanfertigung bei programmgesteuerten Rechenmaschinen” (loosely translatable as “Computer Automated Conversion of Code to Machine Language”), in which he laid down the foundations of compiler construction and described two proposed compilers. Rutishauser was later involved in creating one of the most carefully defined programming languages of this early era, ALGOL. (See next section, FORTRAN, COBOL, and ALGOL.)
Then, in September 1952, Alick Glennie, a student at the University of Manchester, England, created the first of several programs called Autocode for the Manchester Mark I. Autocode was the first compiler actually to be implemented. (The language that it compiled was called by the same name.) Glennie’s compiler had little influence, however. When J. Halcombe Laning created a compiler for the Whirlwind computer at the Massachusetts Institute of Technology (MIT) two years later, he met with similar lack of interest. Both compilers had the fatal drawback of producing code that ran slower (10 times slower, in the case of Laning’s) than code handwritten in machine language.
FORTRAN, COBOL, and ALGOL
Grace Murray Hopper
While the high cost of computer resources placed a premium on fast hand-coded machine-language programs, one individual worked tirelessly to promote high-level programming languages and their associated compilers. Grace Murray Hopper taught mathematics at Vassar College, Poughkeepsie, New York, from 1931 to 1943 before joining the U.S. Naval Reserve. In 1944 she was assigned to the Bureau of Ordnance Computation Project at Harvard University, where she programmed the Mark I under the direction of Howard Aiken. After World War II she joined J. Presper Eckert, Jr., and John Mauchly at their new company and, among other things, wrote compiler software for the BINAC and UNIVAC systems. Throughout the 1950s Hopper campaigned earnestly for high-level languages across the United States, and through her public appearances she helped to remove resistance to the idea. Such urging found a receptive audience at IBM, where the management wanted to add computers to the company’s successful line of business machines.
IBM develops FORTRAN
In the early 1950s John Backus convinced his managers at IBM to let him put together a team to design a language and write a compiler for it. He had a machine in mind: the IBM 704, which had built-in floating-point math operations. That the 704 used floating-point representation made it especially useful for scientific work, and Backus believed that a scientifically oriented programming language would make the machine even more attractive. Still, he understood the resistance to anything that slowed a machine down, and he set out to produce a language and a compiler that would produce code that ran virtually as fast as hand-coded machine language—and at the same time made the program-writing process a lot easier.
By 1954 Backus and a team of programmers had designed the language, which they called FORTRAN (Formula Translation). Programs written in FORTRAN looked a lot more like mathematics than machine instructions:
DO 10 J = 1,11
I = 11 - J
Y = F(A(I + 1))
IF (400 - Y) 4,8,8
4 PRINT 5, I
5 FORMAT (I10, 10H TOO LARGE)
The compiler was written, and the language was released with a professional-looking typeset manual (a first for programming languages) in 1957.
FORTRAN took another step toward making programming more accessible, allowing comments in the programs. The ability to insert annotations, marked to be ignored by the translator program but readable by a human, meant that a well-annotated program could be read in a certain sense by people with no programming knowledge at all. For the first time a nonprogrammer could get an idea what a program did—or at least what it was intended to do—by reading (part of) the code. It was an obvious but powerful step in opening up computers to a wider audience.
FORTRAN has continued to evolve, and it retains a large user base in academia and among scientists.
COBOL
About the time that Backus and his team invented FORTRAN, Hopper’s group at UNIVAC released Math-matic, a FORTRAN-like language for UNIVAC computers. It was slower than FORTRAN and not particularly successful. Another language developed at Hopper’s laboratory at the same time had more influence. Flow-matic used a more English-like syntax and vocabulary:
1 COMPARE PART-NUMBER (A) TO PART-NUMBER (B);
IF GREATER GO TO OPERATION 13;
IF EQUAL GO TO OPERATION 4;
OTHERWISE GO TO OPERATION 2.
Flow-matic led to the development by Hopper’s group of COBOL (Common Business-Oriented Language) in 1959. COBOL was explicitly a business programming language with a very verbose English-like style. It became central to the wide acceptance of computers by business after 1959.
ALGOL
Although both FORTRAN and COBOL were universal languages (meaning that they could, in principle, be used to solve any problem that a computer could unravel), FORTRAN was better suited for mathematicians and engineers, whereas COBOL was explicitly a business programming language.
During the late 1950s a multitude of programming languages appeared. This proliferation of incompatible specialized languages spurred an interest in the United States and Europe to create a single “second-generation” language. A transatlantic committee soon formed to determine specifications for ALGOL (Algorithmic Language), as the new language would be called. Backus, on the American side, and Heinz Rutishauser, on the European side, were among the most influential committee members.
Although ALGOL introduced some important language ideas, it was not a commercial success. Customers preferred a known specialized language, such as FORTRAN or COBOL, to an unknown general-programming language. Only Pascal, a scientific programming-language offshoot of ALGOL, survives.
Operating systems
Control programs
In order to make the early computers truly useful and efficient, two major innovations in software were needed. One was high-level programming languages (as described in the preceding section, FORTRAN, COBOL, and ALGOL). The other was control. Today the systemwide control functions of a computer are generally subsumed under the term operating system, or OS. An OS handles the behind-the-scenes activities of a computer, such as orchestrating the transitions from one program to another and managing access to disk storage and peripheral devices.
The need for some kind of supervisor program was quickly recognized, but the design requirements for such a program were daunting. The supervisor program would have to run in parallel with an application program somehow, monitor its actions in some way, and seize control when necessary. Moreover, the essential—and difficult—feature of even a rudimentary supervisor program was the interrupt facility. It had to be able to stop a running program when necessary but save the state of the program and all registers so that after the interruption was over the program could be restarted from where it left off.
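The save-and-restore discipline at the heart of the interrupt facility can be modeled in miniature: a toy supervisor stashes the program counter and registers, lets a handler run (and freely clobber the machine), then restores everything so the interrupted program resumes unaware. All details below are invented for illustration; no real interrupt hardware works this literally:

# Toy interrupt facility: the supervisor saves the full program state,
# runs a handler, then restores the state so the program resumes cleanly.
class CPU:
    def __init__(self):
        self.pc = 0
        self.registers = {"acc": 0}
        self.saved = None

    def interrupt(self, handler):
        # Save state (program counter + registers) before seizing control.
        self.saved = (self.pc, dict(self.registers))
        handler(self)
        # Restore state: the interrupted program never notices.
        self.pc, self.registers = self.saved

def service_device(cpu):
    cpu.registers["acc"] = 999   # the handler is free to clobber registers
    print("handled device request")

cpu = CPU()
for step in range(5):
    cpu.pc = step
    cpu.registers["acc"] += 10   # the "application program" at work
    if step == 2:                # a device raises an interrupt mid-run
        cpu.interrupt(service_device)

print(cpu.pc, cpu.registers)     # 4 {'acc': 50}, as if never interrupted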
The first computer with such a true interrupt system was the UNIVAC 1103A, which had a single interrupt triggered by one fixed condition. In 1959 the Lincoln Labs TX2 generalized the interrupt capability, making it possible to set various interrupt conditions under software control. However, it would be one company, IBM, that would create, and dominate, a market for business computers. IBM established its primacy largely through one invention: the IBM 360 operating system.
The IBM 360
IBM had been selling business machines since early in the century and had built Howard Aiken’s computer to his architectural specifications. But the company had been slow to implement the stored-program digital computer architecture of the early 1950s. It did develop the IBM 650, a decimal implementation (like UNIVAC) of the IAS plan—and the first computer to sell more than 1,000 units.
The invention of the transistor in 1947 led IBM to reengineer its early machines from electromechanical or vacuum tube to transistor technology in the late 1950s (although the UNIVAC Model 80, delivered in 1958, was the first transistor computer). These transistorized machines are commonly referred to as second-generation computers.
Two IBM inventions, the magnetic disk and the high-speed chain printer, led to an expansion of the market and to the unprecedented sale of 12,000 computers of one model: the IBM 1401. The chain printer required a lot of magnetic core memory, and IBM engineers packaged the printer support, core memory, and disk support into the 1401, one of the first computers to use this solid-state technology.
IBM had several lines of computers developed by independent groups of engineers within the company: a scientific-technical line, a commercial data-processing line, an accounting line, a decimal machine line, and a line of supercomputers. Each line had a distinct hardware-dependent operating system, and each required separate development and maintenance of its associated application software. In the early 1960s IBM began designing a machine that would take the best of all these disparate lines, add some new technology and new ideas, and replace all the company’s computers with one single line, the 360. At an estimated development cost of $5 billion, IBM literally bet the company’s future on this new, untested architecture.
The 360 was in fact an architecture, not a single machine. Designers G.M. Amdahl, F.P. Brooks, and G.A. Blaauw explicitly separated the 360 architecture from its implementation details. The 360 architecture was intended to span a wide range of machine implementations and multiple generations of machines. The first 360 models were hybrid transistor–integrated circuit machines. Integrated circuit computers are commonly referred to as third-generation computers.
Key to the architecture was the operating system. OS/360 ran on all machines built to the 360 architecture—initially six machines spanning a wide range of performance characteristics and later many more machines. It had a shielded supervisory system (unlike the 1401, which could be interfered with by application programs), and it reserved certain operations as privileged in that they could be performed only by the supervisor program.
The first IBM 360 computers were delivered in 1965. The 360 architecture represented a continental divide in the relative importance of hardware and software. After the 360, computers were defined by their operating systems.
The market, on the other hand, was defined by IBM. In the late 1950s and into the 1960s, it was common to refer to the computer industry as “IBM and the Seven Dwarfs,” a reference to the relatively diminutive market share of its nearest rivals—Sperry Rand (UNIVAC), Control Data Corporation (CDC), Honeywell, Burroughs, General Electric (GE), RCA, and National Cash Register Co. During this time IBM had some 60–70 percent of all computer sales. The 360 did nothing to lessen the giant’s dominance. When the market did open up somewhat, it was not due to the efforts of, nor was it in favour of, the dwarfs. Yet, while “IBM and the Seven Dwarfs” (soon reduced to “IBM and the BUNCH of Five,” BUNCH being an acronym for Burroughs, UNIVAC, NCR, CDC, and Honeywell) continued to build Big Iron, a fundamental change was taking place in how computers were accessed.
[Video: the Elliott 803, a transistorized computer made in the United Kingdom in the early 1960s. © Open University, a Britannica Publishing Partner]
[Video: the ICL 2966, a mainframe computer utilizing integrated circuit technology, made in the United Kingdom in the 1980s. © Open University, a Britannica Publishing Partner]
componentplanet · 4 years
Fujitsu Has an Employee Who Keeps a 1959 Computer Running
If you’ve ever worked for a company, you’re probably aware that they tend to keep computers running after they should’ve been replaced with something newer, faster, and/or less buggy. Fujitsu Tokki Systems Ltd, however, takes that concept farther than most. The company still has a fully functional computer it installed back in 1959, the FACOM128B. Even more impressive, it still has an employee on staff whose job is to keep the machine in working order.
The FACOM128B is derived from the FACOM100, described as “Japan’s first practical relay-based automatic computer.” The 100, an intermediate predecessor known as the 128A, and the 128B were classified as electromechanical computers based on the same kind of relays that were typically used in telephone switches. Technologically, the FACOM 128B wasn’t particularly cutting-edge even when constructed; vacuum tube designs were already becoming popular by the mid-1950s. Most of the computers that used electromechanical relays were early efforts, like the Harvard Mark I (built in 1944), or one-off machines rather than commercialized designs.
Relay computers did have advantages, however, even in the mid-to-late 1950s. Relay computers were not as fast as vacuum-tube-powered machines, but they were significantly more reliable. Performance also appears to have continued to improve in these designs as well, though finding exact comparison figures for performance on early computers can be difficult. Software, as we understand the term today, barely existed in the 1950s. Not all computers were capable of storing programs, and computers were often custom-built for specific purposes as unique designs, with significant differences in basic parameters.
Wikipedia notes, however, that the Harvard Mark I was capable of “3 additions or subtractions in a second. A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute.” The FACOM128B was faster than this, with 5-10 additions or subtractions per second. Division and multiplication were also significantly faster.
Image and data from the IPSJ Computer Museum
The man responsible for maintaining the FACOM128B, Tadao Hamada, believes that the work he does to keep the system running is a vital part of protecting Japan’s computing heritage and making sure future students can see functional examples of where we came from, not just collections of parts in a box. Hamada has pledged to maintain the system forever. A year ago, the FACOM128B was registered as “Essential Historical Materials for Science and Technology” by the Japanese National Museum of Nature and Science. The goal of the museum, according to Fujitsu, is “to select and preserve materials representing essential results in the development of science and technology, that are important to pass on to future generations, and that have had a remarkable impact on the shape of the Japanese economy, society, culture, and the lifestyles of its citizens.”
A video of the FACOM128B in-action can be seen below:
[Embedded YouTube video of the FACOM128B in action]
The FACOM128B was used to design camera lenses and the YS-11, the first and only post-war airliner to be wholly developed and manufactured in Japan until the Mitsubishi SpaceJet. While the YS-11 aircraft was not commercially successful, this wasn’t the result of poor computer modeling; the FACOM128B was considered to be a highly reliable computer. Fujitsu’s decision to keep the machine in working order was itself part of a larger program, begun in 2006. The company writes:
The Fujitsu Relay-type Computer Technology Inheritance Project began activities in October 2006, with the goal of conveying the thoughts and feelings of the technical personnel involved in its development and production to the next generation by continuing to operate the relay-type computer. In this project, the technical personnel involved in the design, production, maintenance, and operation of the computer worked with current technical personnel to keep both the FACOM128B, which is fast approaching its 60th anniversary, and its sister machine, the FACOM138A, in an operational state.
Credit: Fujitsu
Hamada has been working on the electromechanical computer since the beginning of this program. He notes that at first he had to learn how to translate the diagrams the machine’s original operators had used. Asked why he believes maintaining the machine is so important, Hamada said: “If the computer does not work, it will become a mere ornament. What people feel and what they see are different among different individuals. The difference cannot be identified unless it is kept operational.”
It’s always interesting to revisit what’s been done with older hardware or off-the-wall computer projects, and I can actually see Hamada’s point. Sometimes, looking at older or different technology is a window into how a device functions. Other times, it gives you insight into the minds of the people that built the machine and the problems they were attempting to solve.
One of my favorite off-the-wall projects was the Megaprocessor back in 2016, a giant CPU you could actually see, with each individual block implemented in free-standing panels. Being able to see data being passed across a physical bus is an excellent way to visualize what’s happening inside a CPU core. While maintaining the FACOM128B doesn’t offer that kind of access, it does illustrate how computers worked when we were building them from very different materials and strategies than we use today.
Update (5/18/2020): Since we first ran this story, YouTuber CuriousMarc arranged for a visit to Fujitsu and an extensive discussion of the machine. You can see his full video below. It’s a bit lengthy, but it dives into the history of the system and Hamada himself.
[Embedded YouTube video: CuriousMarc’s visit to Fujitsu]
Now Read:
Meet the Megaprocessor: A 20kHz behemoth CPU you can actually see in action
Apollo Guidance Computer Restored, Used to Mine Bitcoin
Inside IBM’s $67 billion SAGE, the largest computer ever built
Source: https://www.extremetech.com/computing/296022-fujitsu-has-an-employee-dedicated-to-keeping-a-1959-computer-up-and-running
filiplig · 6 years
Gleick, James - The Information: A History, a Theory, a Flood
p. 23
In the name of speed, Morse and Vail had realized that they could save strokes by reserving the shorter sequences of dots and dashes for the most common letters. But which letters would be used most often? Little was known about the alphabet’s statistics. In search of data on the letters’ relative frequencies, Vail was inspired to visit the local newspaper office in Morristown, New Jersey, and look over the type cases. He found a stock of twelve thousand E’s, nine thousand T’s, and only two hundred Z’s. He and Morse rearranged the alphabet accordingly. They had originally used dash-dash-dot to represent T, the second most common letter; now they promoted T to a single dash, thus saving telegraph operators uncountable billions of key taps in the world to come. Long afterward, information theorists calculated that they had come within 15 percent of an optimal arrangement for telegraphing English text.
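Vail's procedure, counting letter frequencies and handing the shortest codewords to the commonest letters, takes only a few lines of code to replay today. The sample text and codeword list below are made up for illustration:

from collections import Counter

# Morse-style codewords, shortest first (dot = '.', dash = '-').
codewords = [".", "-", "..", ".-", "-.", "--", "...", "..-", ".-.", ".--",
             "-..", "-.-", "--.", "---"]

sample = "in the name of speed reserve the shortest codes for the commonest letters"
freq = Counter(ch for ch in sample if ch.isalpha())

# Most frequent letter gets the shortest codeword, and so on down the list.
code = {letter: codewords[rank]
        for rank, (letter, _) in enumerate(freq.most_common(len(codewords)))}

for letter, cw in sorted(code.items(), key=lambda kv: (len(kv[1]), kv[1])):
    print(letter, cw)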
p. 27
Neither Kele nor English yet had words to say, “allocate extra bits for disambiguation and error correction.” Yet this is what the drum language did. Redundancy—inefficient by definition—serves as the antidote to confusion. It provides second chances. Every natural language has redundancy built in; this is why people can understand text riddled with errors and why they can understand conversation in a noisy room.
p. 27
After publishing his book, John Carrington came across a mathematical way to understand this point. A paper by a Bell Labs telephone engineer, Ralph Hartley, even had a relevant-looking formula: H = n log s, where H is the amount of information, n is the number of symbols in the message, and s is the number of symbols available in the language. [...] The formula quantified a simple enough phenomenon (simple, anyway, once it was noticed): the fewer symbols available, the more of them must be transmitted to get across a given amount of information. For the African drummers, messages need to be about eight times as long as their spoken equivalents.
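Hartley's formula makes the factor of eight easy to check: for fixed H, n is inversely proportional to log s. With two drum tones, each beat carries one bit; if each spoken symbol carries about eight bits (an assumed figure for illustration, not one Gleick gives), the drummed message must run about eight times longer:

import math

# H = n * log2(s): hold the information H fixed and compare symbol counts.
def symbols_needed(h_bits, s):
    return h_bits / math.log2(s)

H = 64                             # some fixed amount of information, in bits
n_drum = symbols_needed(H, 2)      # two drum tones: 1 bit per beat
n_speech = symbols_needed(H, 256)  # assumed ~8 bits per spoken symbol
print(n_drum / n_speech)           # 8.0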
p. 74
In Napier’s mind was an analogy: differences are to ratios as addition is to multiplication. His thinking crossed over from one plane to another, from spatial relationships to pure numbers. Aligning these scales side by side, he gave a calculator a practical means of converting multiplication into addition—downshifting, in effect, from the difficult task to the easier one. In a way, the method is a kind of translation, or encoding. The natural numbers are encoded as logarithms. The calculator looks them up in a table, the code book. In this new language, calculation is easy: addition instead of multiplication, or multiplication instead of exponentiation. When the work is done, the result is translated back into the language of natural numbers. Napier, of course, could not think in terms of encoding.
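Napier's encode-add-decode cycle can be replayed directly: encode the factors as logarithms via a table, add, and translate the sum back. A small demonstration, with a computed table standing in for Napier's printed code book:

import math

# The "code book": natural numbers encoded as (base-10) logarithms.
table = {n: math.log10(n) for n in range(1, 10000)}

def antilog(x):
    return round(10 ** x)   # translate back into natural numbers

a, b = 47, 83
encoded = table[a] + table[b]   # multiplication downshifted to addition
print(antilog(encoded), a * b)  # 3901 3901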
p. 136
Signs and symbols were not just placeholders; they were operators, like the gears and levers in a machine. Language, after all, is an instrument. It was seen distinctly now as an instrument with two separate functions: expression and thought. Thinking came first, or so people assumed. To Boole, logic was thought—polished and purified.
p. 148
To eliminate Russell’s paradox Russell took drastic measures. The enabling factor seemed to be the peculiar recursion within the offending statement: the idea of sets belonging to sets. Recursion was the oxygen feeding the flame. In the same way, the liar paradox relies on statements about statements. “This statement is false” is meta-language: language about language. Russell’s paradoxical set relies on a meta-set: a set of sets. So the problem was a crossing of levels, or, as Russell termed it, a mixing of types. His solution: declare it illegal, taboo, out of bounds. No mixing different levels of abstraction. No self-reference; no self-containment. The rules of symbolism in Principia Mathematica would not allow the reaching-back-around, snake-eating-its-tail feedback loop that seemed to turn on the possibility of self-contradiction. This was his firewall.
p. 163
It seemed intuitively clear that the amount of information should be proportional to the number of symbols: twice as many symbols, twice as much information. But a dot or dash—a symbol in a set with just two members—carries less information than a letter of the alphabet and much less information than a word chosen from a thousand-word dictionary. The more possible symbols, the more information each selection carries.
p. 170
Turing was programming his machine, though he did not yet use that word. From the primitive actions—moving, printing, erasing, changing state, and stopping—larger processes were built up, and these were used again and again: “copying down sequences of symbols, comparing sequences, erasing all symbols of a given form, etc.” The machine can see just one symbol at a time, but can in effect use parts of the tape to store information temporarily. As Turing put it, “Some of the symbols written down … are just rough notes ‘to assist the memory.’” The tape, unfurling to the horizon and beyond, provides an unbounded record. In this way all arithmetic lies within the machine’s grasp. Turing showed how to add a pair of numbers—that is, he wrote out the necessary table of states. He showed how to make the machine print out (endlessly) the binary representation of π. He spent considerable time working out what the machine could do and how it would accomplish particular tasks. He demonstrated that this short list covers everything a person does in computing a number. No other knowledge or intuition is necessary. Anything computable can be computed by this machine. Then came the final flourish. Turing’s machines, stripped down to a finite table of states and a finite set of input, could themselves be represented as numbers. Every possible state table, combined with its initial tape, represents a different machine. Each machine itself, then, can be described by a particular number—a certain state table combined with its initial tape. Turing was encoding his machines just as Gödel had encoded the language of symbolic logic. This obliterated the distinction between data and instructions: in the end they were all numbers. For every computable number, there must be a corresponding machine number.
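A machine in Turing's sense (a finite state table, a tape, one scanned symbol at a time) takes only a few lines to realize. Here is a minimal sketch running a one-state program that flips every bit; the encoding conventions are mine, not Turing's:

def run_turing_machine(table, tape, state="start"):
    # table maps (state, symbol) -> (symbol_to_write, move, next_state)
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"   # "_" is blank
        write, move, state = table[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# One-state program: scan right, flipping 0 <-> 1, halt on the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "10110"))  # 01001_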
p. 172
So Turing’s computer—a fanciful, abstract, wholly imaginary machine—led him to a proof parallel to Gödel’s. Turing went further than Gödel by defining the general concept of a formal system. Any mechanical procedure for generating formulas is essentially a Turing machine. Any formal system, therefore, must have undecidable propositions. Mathematics is not decidable. Incompleteness follows from uncomputability.
p. 178
Information is uncertainty, surprise, difficulty, and entropy:  “Information is closely associated with uncertainty.” Uncertainty, in turn, can be measured by counting the number of possible messages. If only one message is possible, there is no uncertainty and thus no information.
Some messages may be likelier than others, and information implies surprise. Surprise is a way of talking about probabilities. If the letter following t (in English) is h, not so much information is conveyed, because the probability of h was relatively high.
“What is significant is the difficulty in transmitting the message from one point to another.” Perhaps this seemed backward, or tautological, like defining mass in terms of the force needed to move an object. But then, mass can be defined that way.
Information is entropy. This was the strangest and most powerful notion of all. Entropy—already a difficult and poorly understood concept—is a measure of disorder in thermodynamics, the science of heat and energy.
p. 185
This is where the statistical structure of natural languages reenters the picture. If the thousand-character message is known to be English text, the number of possible messages is smaller—much smaller. Looking at correlations extending over eight letters, Shannon estimated that English has a built-in redundancy of about 50 percent: that each new character of a message conveys not 5 bits but only about 2.3. Considering longer-range statistical effects, at the level of sentences and paragraphs, he raised that estimate to 75 percent [...]
p. 186
Quantifying predictability and redundancy in this way is a backward way of measuring information content. If a letter can be guessed from what comes before, it is redundant; to the extent that it is redundant, it provides no new information. If English is 75 percent redundant, then a thousand-letter message in English carries only 25 percent as much information as one thousand letters chosen at random. Paradoxical though it sounded, random messages carry more information. The implication was that natural-language text could be encoded more efficiently for transmission or storage.
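The percentages follow from a small calculation. A letter drawn uniformly from a 26-letter alphabet carries log2(26), about 4.7 bits (Shannon's rounded 5); measured rates of 2.3 bits and roughly 1.2 bits per letter (the second figure is back-calculated from the 75 percent estimate) then give:

import math

max_bits = math.log2(26)   # ~4.70 bits per letter if all were equally likely
for bits in (2.3, 1.2):
    redundancy = 1 - bits / max_bits
    print(f"{bits} bits/letter -> about {redundancy:.0%} redundant")
# 2.3 bits/letter -> about 51% redundant
# 1.2 bits/letter -> about 74% redundant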
p. 199
There was a difference in emphasis between Shannon and Wiener. For Wiener, entropy was a measure of disorder; for Shannon, of uncertainty. Fundamentally, as they were realizing, these were the same. The more inherent order exists in a sample of English text—order in the form of statistical patterns, known consciously or unconsciously to speakers of the language—the more predictability there is, and in Shannon’s terms, the less information is conveyed by each subsequent letter. When the subject guesses the next letter with confidence, it is redundant, and the arrival of the letter contributes no new information. Information is surprise.
p. 216
A hot stone plunged into cold water can generate work—for example, by creating steam that drives a turbine—but the total heat in the system (stone plus water) remains constant. Eventually, the stone and the water reach the same temperature. No matter how much energy a closed system contains, when everything is the same temperature, no work can be done. It is the unavailability of this energy—its uselessness for work—that Clausius wanted to measure. He came up with the word entropy, formed from Greek to mean “transformation content.”
p. 217
It became a totemic concept. With entropy, the “laws” of thermodynamics could be neatly expressed: First law: The energy of the universe is constant. Second law: The entropy of the universe always increases. There are many other formulations of these laws, from the mathematical to the whimsical, e.g., “1. You can’t win; 2. You can’t break even either.”
p. 218
Order is subjective—in the eye of the beholder. Order and confusion are not the sorts of things a mathematician would try to define or measure. Or are they? If disorder corresponded to entropy, maybe it was ready for scientific treatment after all.
p. 222
The demon sees what we cannot—because we are so gross and slow—namely, that the second law is statistical, not mechanical. At the level of molecules, it is violated all the time, here and there, purely by chance. The demon replaces chance with purpose. It uses information to reduce entropy.
p. 224
But information is physical. Maxwell’s demon makes the link. The demon performs a conversion between information and energy, one particle at a time. Szilárd—who did not yet use the word information—found that, if he accounted exactly for each measurement and memory, then the conversion could be computed exactly. So he computed it. He calculated that each unit of information brings a corresponding increase in entropy—specifically, by k log 2 units. Every time the demon makes a choice between one particle and another, it costs one bit of information. The payback comes at the end of the cycle, when it has to clear its memory (Szilárd did not specify this last detail in words, but in mathematics). Accounting for this properly is the only way to eliminate the paradox of perpetual motion, to bring the universe back into harmony, to “restore concordance with the Second Law.”
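Szilárd's k log 2 per bit has a definite size; it is the same quantity later known as the Landauer limit for erasing one bit. Plugging in Boltzmann's constant (natural logarithm, per the physics convention; room temperature is an assumption for the energy figure):

import math

k = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0          # assumed room temperature, K

entropy_per_bit = k * math.log(2)     # ~9.57e-24 J/K
energy_per_bit = k * T * math.log(2)  # ~2.87e-21 J to pay for one bit
print(entropy_per_bit, energy_per_bit)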
p. 229
The earth is not a closed system, and life feeds upon energy and negative entropy leaking into the earth system.… The cycle reads: first, creation of unstable equilibriums (fuels, food, waterfalls, etc.); then use of these reserves by all living creatures. 
Living creatures confound the usual computation of entropy. More generally, so does information. “Take an issue of The New York Times, the book on cybernetics, and an equal weight of scrap paper,” suggested Brillouin. “Do they have the same entropy?” If you are feeding the furnace, yes. But not if you are a reader. There is entropy in the arrangement of the ink spots. For that matter, physicists themselves go around transforming negative entropy into information, said Brillouin. From observations and measurements, the physicist derives scientific laws; with these laws, people create machines never seen in nature, with the most improbable structures.
p. 236
By now the word code was so deeply embedded in the conversation that people seldom paused to notice how extraordinary it was to find such a thing—abstract symbols representing arbitrarily different abstract symbols—at work in chemistry, at the level of molecules. The genetic code performed a function with uncanny similarities to the metamathematical code invented by Gödel for his philosophical purposes. Gödel’s code substitutes plain numbers for mathematical expressions and operations; the genetic code uses triplets of nucleotides to represent amino acids. Douglas Hofstadter was the first to make this connection explicitly, in the 1980s: “between the complex machinery in a living cell that enables a DNA molecule to replicate itself and the clever machinery in a mathematical system that enables a formula to say things about itself.”
p. 256
“Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”
p. 259
Wheeler said this much, at least: “Probability, like time, is a concept invented by humans, and humans have to bear the responsibility for the obscurities that attend it.”
p. 267
“At each given moment there is only a fine layer between the ‘trivial’ and the impossible,” Kolmogorov mused in his diary.
p. 268
The three are fundamentally equivalent: information, randomness, and complexity—three powerful abstractions, bound all along like secret lovers.
p. 271
It is another recursive, self-looping twist. This was Chaitin’s version of Gödel’s incompleteness. Complexity, defined in terms of program size, is generally uncomputable. Given an arbitrary string of a million digits, a mathematician knows that it is almost certainly random, complex, and patternless—but cannot be absolutely sure.
p. 272
As Chaitin put it, “God not only plays dice in quantum mechanics and nonlinear dynamics, but even in elementary number theory.”
p. 272
Kolmogorov-Chaitin (KC) complexity is to mathematics what entropy is to thermodynamics: the antidote to perfection. Just as we can have no perpetual-motion machines, there can be no complete formal axiomatic systems.
p. 280
According to this measure, a million zeroes and a million coin tosses lie at opposite ends of the spectrum. The empty string is as simple as can be; the random string is maximally complex. The zeroes convey no information; coin tosses produce the most information possible. Yet these extremes have something in common. They are dull. They have no value. If either one were a message from another galaxy, we would attribute no intelligence to the sender. If they were music, they would be equally worthless. Everything we care about lies somewhere in the middle, where pattern and randomness interlace.
p. 282
The more energy, the faster the bits flip. Earth, air, fire, and water in the end are all made of energy, but the different forms they take are determined by information. To do anything requires energy. To specify what is done requires information. —Seth Lloyd (2006)
componentplanet · 4 years
How an Article on Game Difficulty Explained My Own Modding, 18 Years Later
As the pandemic leaves an awful lot of people at home with not much to do, we’re resurfacing some older coverage on topics and news that isn’t particularly time-sensitive. Last fall, I read an article that literally explained to me why I got into modding two decades ago. If you’ve been a PC modder yourself or simply enjoy using mods, you might find it an interesting discussion of the topic.
A game’s difficulty level can make or break the title. Games that are perceived as too difficult become boring, depressing grinds, while games that are too easy become boring and tedious, with little challenge. One of the most profound differences between World of Warcraft Classic and Retail is the difference in difficulty. Of course, every player has their own ideas about how hard a game should be, but there’s no arguing that the difficulty of a title is important.
But according to game developer Jennifer Scheurle, game developers think about game difficulty very differently than players do, which may be part of why conversations on this topic sometimes seem to break down. Her piece resonated with me, partly because it reminded me of the reasons why I became a game modder, once upon a time. According to Scheurle, difficulty is all about trust.
“At the core of the difference between how game designers and players speak about difficulty,” she writes, “is the fact that we discuss it in terms of skill progression. All difficulty design is essentially that: crafting how players will learn, apply skills, and progress through challenges.”
Graphic by Jennifer Scheurle for Polygon
She then walks through examples of how this plays out in games, using the Dark Souls series as an example. DS games ask you to accept that you will die (frequently) as part of learning how encounters function. You aren’t simply being killed by mechanics you can’t master, beat, or counter, you’re learning how the game functions and how to counter incoming attacks. The game, in turn, obeys its own internal rules. Players often become angry at a game if they feel it isn’t holding up its end of the bargain in some particular, whether that refers to drop rates, spawn rates, boss difficulty, or the damage you take versus the damage you deal. She also discusses the importance of how a game teaches players to play it, and the various in-game ways that developers communicate game difficulty and associated rules. It’s a very different view of the topic than simply boiling it down into whether a game is “hard” or “easy,” and it leads to a much more nuanced view of how and why different titles may put difficulty in different places.
The article resonated with me in part because it describes part of why I became a Diablo II modder and taught me something about my own motivation. I don’t want to seem as if I’m hijacking Scheurle’s excellent discussion of game difficulty because it’s worth a read in its own right, but I’m going to switch gears a bit and talk about my own experience. To put it simply: I was pissed.
Diablo II’s Trust Fail
This was the early days of Diablo II, before the Lord of Destruction expansion had even come out. Patch 1.03 dropped not long before I started modding, to put a date on things. On Normal difficulty, Diablo II worked pretty well, but as you progressed into Nightmare and Hell difficulty modes, deficiencies became apparent.
Back then, Diablo II used a linear leveling curve in which the amount of XP you needed to gain for each additional level increased by a flat amount — the amount you needed for your previous level, plus a flat modifier. This was exacerbated by a leveling penalty, introduced in Nightmare, in which you lost XP gained towards your next level if your character died. You couldn’t drop a level due to this XP loss, but you could theoretically be 99 percent of the way to Lvl 50 and fall back to 0 percent through repeated deaths. The net result of this was that the amount of time required for each additional level increased sharply, and this became increasingly noticeable as you moved into the later game.
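To see why that scaling dragged: if each level costs a flat amount more than the last, total XP, and hence total time at a roughly constant XP income, grows quadratically with level. A back-of-the-envelope sketch with invented numbers, not Blizzard's actual tables:

# Hypothetical arithmetic-progression curve: each level costs 500 XP more
# than the one before it. All figures are invented for illustration.
base, step, xp_per_hour = 1000, 500, 20000

total_xp = 0
for level in range(2, 51):
    total_xp += base + step * (level - 2)
    if level % 10 == 0:
        print(f"level {level:2d}: {total_xp / xp_per_hour:5.1f} hours total")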
Now for the coup de grace: The game was poorly balanced outside of Normal difficulty. I became a game modder specifically because my Barbarian character with maximum Fire Resist was being one-shotted by mini-bosses with Fire Aura even when he used abilities that temporarily increased his HP. These mini-bosses and bosses could one-shot a character virtually as soon as you saw them. Death meant losing a portion of gold and dropping equipped items. Attempting to retrieve those items (using whatever alternate gear you had access to) was virtually guaranteed to get you killed at least once more because you’d have to drag monsters away from your corpse in order to try and retrieve what you originally had. Mini-bosses could also spawn with these modifiers in critical areas, where it was exceptionally difficult to move them away from a critical spawn point. There was no way to see the exact location of the fire aura on the ground; you knew you’d touched it when you died.
It was cheap. That’s what I called it. I didn’t consider it any kind of legitimate difficulty spike. It just felt like a way for Blizzard to make the game harder by killing players in a manner they couldn’t even fight. I became a modder because I was angry about the way that these imbalances had changed the game. I felt betrayed.
Looking back (and using Scheurle's article for reference), I've realized that I was angry because Diablo II had broken trust with me. Some of these flaws existed in Normal as well, but they were less apparent there because other scaling factors masked them. Among the changes between Normal and the later difficulties that worsened the scaling problems were the much slower pace of leveling and the absence of unique items designed for the Nightmare and Hell difficulty modes. The latter made it pointless to spend gold on gambling (since gambling, at the time, only produced normal weapons), while the slow pace of leveling substantially curtailed one of a player's primary means of gaining power. There were also notable power imbalances created by the use of percentages for some metrics, like life steal. In original vanilla D2, life steal was absurdly overpowered, and absolutely essential to surviving the late game. Certain classes were locked into specific endgame strategies as a result of bad math and poorly balanced game mechanics. It grated on me.
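As a toy illustration of the life-steal problem (invented numbers, not the game's actual values), consider how healing that's a percentage of outgoing damage scales as damage climbs:

```python
LIFE_STEAL_PCT = 0.08  # hypothetical 8% life steal from gear

def healing_per_hit(damage: float) -> float:
    # Percentage-based leech: healing grows linearly with damage dealt.
    return damage * LIFE_STEAL_PCT

for damage in (50, 500, 5000):  # rough early-, mid-, and late-game hits
    print(f"{damage} damage -> {healing_per_hit(damage):.0f} HP per hit")
# 50 -> 4, 500 -> 40, 5000 -> 400: the same affix that's negligible
# early becomes the backbone of late-game survival.
```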
The changes to Diablo II from Normal to later difficulties weren’t just the result of Blizzard trying to be jerks. It’s common for RPGs to have poorly balanced endgames because most people do not play them for long enough to actually experience the endgame. This was a topic of discussion around Skyrim when that game was new, and it explains much of what happened with Diablo II way back then.
I developed the Fusion 2 mod for Diablo II, followed by a much larger overhaul, Cold Fusion. Together with three other people (Justin Gash, John Stanford, and Matt Wesson), I poured several thousand man-hours of development time into Cold Fusion. I led the effort, which formed a core part of my best friend's senior project in computer science and consumed no small chunk of my own senior year in college. I'm not sure the game files exist on the internet any longer, but you can see the original website archived by the Wayback Machine. Fair warning: I was not a web designer. Still, it gives some idea of the project's scope, if you're familiar with Diablo II.
While I don’t expect anyone reading this to have ever played the mod (I never released an LoD-compatible version of the project), it was a pretty major part of my life while I worked on it. We overhauled the entire title: tweaking drop rates, fixing bugs, and implementing a new leveling curve, a new difficulty curve, new monsters, and new unique items intended for both the Nightmare and Hell difficulty levels. We developed new visuals and skills using pieces of code the developers had left in place in the engine, along with new audio effects that another friend created. We pulled certain unique items over from Diablo I (with their Diablo I art) and reworked the skill trees to better balance the game. Our goal, in every case, was to build a more consistent Diablo II that didn’t funnel characters into a single endgame build but allowed other skills to compete as well. I was quite proud that when Lord of Destruction came out, it adjusted Diablo II in some of the same ways we had, and even introduced new spells similar to some of the ones we built. I’m absolutely not claiming that Blizzard took inspiration from our work; it was just neat to see that we’d been thinking along the same lines as people at the company.
For example: We implemented a logarithmic curve for CF’s level scaling, one designed to allow a player to run the game once at each difficulty level and finish “Hell” near maximum level. Blizzard wanted a game that would require many, many runs through maximum difficulty before awarding Lvl 99, and they used a differently shaped curve to do it; still, they moved away from their original linear curve when they launched the expansion, Lord of Destruction.
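Here's a rough sketch of the contrast between the two curve shapes. All constants are hypothetical, and neither function is the actual Cold Fusion or LoD formula; the point is that a logarithmic increment flattens at high levels while a linear one keeps climbing:

```python
import math

def linear_xp(level: int, base: int = 500, step: int = 250) -> int:
    # Flat per-level increment: cost keeps climbing without bound.
    return base + step * (level - 1)

def logarithmic_xp(level: int, scale: float = 4000.0) -> float:
    # Logarithmic shape: per-level cost flattens out at high levels.
    return scale * math.log(level + 1)

for lvl in (1, 25, 50, 75, 99):
    print(lvl, linear_xp(lvl), round(logarithmic_xp(lvl)))
# The linear column grows steadily all the way to the cap; the
# logarithmic column barely moves past mid-game, which is what lets a
# fixed number of playthroughs carry a character near the level cap.
```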
Until now, I never really understood why I was so unhappy with the base game in the first place. Now I do. I felt as though the collective changes to Diablo II that happened after Normal weren’t just the result of making the game harder — they made the game different, in ways that felt like they’d broken the trust Blizzard had established in building the game.
It’s not often that you discover the explanation for why you spent a few thousand hours rebuilding someone else’s project in an article written 18 years after the fact. I suppose Cold Fusion has always felt a bit like a road not taken for me. It had its fans, but it was one reasonably popular mod among many, not a DOTA or a Counter-Strike. Either way, I appreciate Scheurle’s discussion of difficulty and how developers think about the topic. It shed some light on an episode of my own life.
Now Read:
Meet the PiS2: A PS2 Portable Built with a Raspberry Pi 2 Server
World of Warcraft Classic vs. Retail, Part 1: Which Early Game Plays Better?
PC Gamers Who Didn’t Play Classic Console Games Missed Out on Great Experiences
from ExtremeTech https://www.extremetech.com/gaming/299138-how-an-article-on-game-difficulty-explained-my-own-modding-18-years-later from Blogger http://componentplanet.blogspot.com/2020/04/how-article-on-game-difficulty.html
0 notes