
history of soap

Myth has it that around 1000 BC soap was discovered on Sappo Hill in Rome by a group of women rinsing their clothes in the river at the base of the hill, below the spot where animal sacrifices took place. They noticed that the clothes came clean where they touched the soapy clay oozing down the hill into the water. They later discovered that the same cleansing substance formed wherever animal fat had soaked down through the wood ashes and into the clay soil.

Factually, we know that soap has been around for nearly five thousand years. The earliest known evidence of soap use is a set of Babylonian clay cylinders dating from 2800 BC that contain a soap-like substance. A formula for soap consisting of water, alkali and cassia oil was written on a Babylonian clay tablet around 2200 BC.

The Ebers papyrus (Egypt, 1550 BC) indicates that ancient Egyptians bathed regularly and combined animal and vegetable oils with alkaline salts to create a soap-like substance. Egyptian documents mention that a soap-like substance was used in the preparation of wool for weaving.

According to Pliny the Elder, the Phoenicians prepared soap from goat’s tallow and wood ashes in 600 BC and sometimes used it as an article of barter with the Gauls. The word “soap” first appears in a European language in Pliny’s Historia Naturalis, which discusses the manufacture of soap from tallow and ashes; the only use he mentions for it, however, is as a pomade for hair, and he notes rather disapprovingly that among the Gauls and Germans men were likelier to use it than women.

Soap was widely known in the Roman Empire; whether the Romans learned its use and manufacture from ancient Mediterranean peoples or from the Celts, inhabitants of Britannia, is not known. In the first century A.D., Romans used urine to make a soap-like substance: the urine contained ammonium carbonate, which reacted with the oils and fat in wool to produce a partial saponification. People called fullones walked the city streets collecting urine to sell to the soapmakers.

The Celts, who produced their soap from animal fats and plant ashes, named the product saipo, from which the word soap is derived. The importance of soap for washing and cleaning was apparently not recognized until the 2nd century A.D., when the Greek physician Galen mentioned it as a medicament and as a means of cleansing the body; before that, soap had been used chiefly as a medicine.

The writings attributed to the 8th-century Arab savant Jabir ibn Hayyan (Geber) repeatedly mention soap as a cleansing agent. The Arabs made soap from vegetable oils such as olive oil, sometimes with aromatic oils such as thyme oil. Sodium lye (al-soda al-kawia, NaOH) was used for the first time, and the basic formula has not changed from that of the soap sold on the market today. From the beginning of the 7th century soap was produced in Nablus (Palestine), Kufa (Iraq) and Basra (Iraq). Arabian soap was perfumed and coloured; some of the soaps were liquid and others hard, and there was also a special soap for shaving. It was sold commercially for 3 dirhams (0.3 dinars) a piece in 981 AD.

Historically, soap was made by mixing animal fats with lye. Because of the caustic lye, this was a dangerous procedure (perhaps more dangerous than any present-day home activities) which could result in serious chemical burns or even blindness. Before commercially-produced lye was commonplace, it was produced at home for soap making from the ashes of a wood fire.

In Europe, soap production in the Middle Ages centered first at Marseilles, later at Genoa, then at Venice. Although some soap manufacture developed in Germany, the substance was so little used in central Europe that a box of soap presented to the Duchess of Juelich in 1549 caused a sensation. As late as 1672, when a German, A. Leo, sent Lady von Schleinitz a parcel containing soap from Italy, he accompanied it with a detailed description of how to use the mysterious product.

Castile soap, made entirely from olive oil, was produced in the Kingdom of Castile as early as the 16th century. Finely sifted alkaline ash of the Salsola species of thistle, called barilla, was boiled with locally available olive oil instead of tallow. By adding salty brine to the boiled liquor, the soap was made to float to the surface, where it could be skimmed off by the soap-boiler, leaving the excess lye and impurities to settle out. This produced what was probably the first hard white soap, which hardened further as it aged without losing its whiteness, forming jabón de Castilla, which eventually became the generic name.

The first English soapmakers appeared at the end of the 12th century in Bristol. In the 13th and 14th centuries, a small community of them grew up in the neighborhood of Cheapside in London. In those days soapmakers had to pay a tax on all the soap they produced. After the Napoleonic Wars this tax rose as high as three pence per pound; soap-boiling pans were fitted with lids that could be locked every night by the tax collector in order to prevent production under cover of darkness. Not until 1853 was this high tax finally abolished, at a sacrifice to the state of over £1,000,000. Because of the high cost of soap, ordinary households largely made do without it until about 1880, when cheap factory-made soap began to flood the market. Soap came into such common use in the 19th century that Justus von Liebig, a German chemist, declared that the quantity of soap consumed by a nation was an accurate measure of its wealth and civilization.

Soap was certainly known in England in the sixteenth century, but as it was made of fat, and fat was needed for making candles and rushlights, it was always a prerogative of the rich. When soap was used, it was primarily for cleaning linens and clothes rather than the human body. Since little emphasis was placed on using soap for bodily cleanliness, people (shall we say) had an “air” about them that they tried to overcome by wearing sachets of herbs around their necks or carrying these sachets in their pockets. When baths were taken, whether soap was used or not, the bath water was traditionally shared among the family members, with the small children being bathed last. The end result was water so dirty and murky that a small child could literally be lost in it – hence the saying “Don’t throw the baby out with the bath water”.

     Early soapmakers probably used ashes and animal fats. Simple wood or plant ashes containing potassium carbonate were dispersed in water, and fat was added to the solution. This mixture was then boiled; ashes were added again and again as the water evaporated. During this process a slow chemical splitting of the neutral fat took place; the fatty acids could then react with the alkali carbonates of the plant ash to form soap (this reaction is called saponification).

Animal fats containing a percentage of free fatty acids were used by the Celts. The presence of free fatty acids certainly helped to get the process started. This method probably prevailed until the end of the Middle Ages, when slaked lime came to be used to causticize the alkali carbonate. Through this process, chemically neutral fats could be saponified easily with the caustic lye. The transformation of soap production from a handicraft into an industry was helped by the introduction of the Leblanc process for the production of soda ash from brine (about 1790) and by the work of a French chemist, Michel Eugène Chevreul, who in 1823 showed that saponification is the chemical process of splitting fat into the alkali salt of fatty acids (that is, soap) and glycerin.
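
In modern terms, Chevreul’s result can be sketched schematically (for a generic fat and sodium lye; the actual fatty acids vary with the fat or oil used):

    fat (a triglyceride) + 3 NaOH  →  glycerin + 3 sodium salts of fatty acids (soap)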

     The method of producing soap by boiling with open steam, introduced at the end of the 19th century, was another step toward industrialization. Industrialized soap making, however, tended to use more chemically produced ingredients and fewer natural ones, producing in essence a detergent rather than a soap such as our ancestors used.

     With World War I and the shortages of fats and oils that occurred, people felt compelled to look for a replacement for soap, which led to the invention of synthetic detergents. These detergents, while able to clean our clothes effectively, are composed of harsh chemicals that clean, scent, and coat our clothes. Unfortunately, many of these synthetic detergents have found their way into our skin care products. In some people this has caused hypersensitivity to these “soaps”, rashes, skin irritations, and allergies, as well as a general drying out of the skin. Increasingly, we are required to use hand creams and lotions to prevent or reduce the dryness and roughness arising from exposure to household detergents, wind, sun, and dry atmospheres. Like facial creams, they act largely by replacing lost water and laying down an oil film to reduce subsequent moisture loss while the body’s natural processes repair the damage.

     In modern times, the use of soap has become universal in industrialized nations due to a better understanding of the role of hygiene in reducing the population size of pathogenic microorganisms. Manufactured bar soaps first became available in the late nineteenth century, and advertising campaigns in Europe and the United States helped to increase popular awareness of the relationship between cleanliness and health. By the 1950s, soap had gained public acceptance as an instrument of personal hygiene.

     In recent years, there has been a grassroots return to making “natural” soap in the home. These cottage industries make soap from ingredients found in nature, chosen for their skin care qualities, rather than synthetic soap that relies on laboratory-made chemicals to make it look, feel, and act a certain way. It is tempting for soap manufacturers to lean toward synthetics and away from natural materials: synthetics are more stable in more situations and less expensive in the long run, unlike fats and oils, which differ slightly from tree to tree and region to region.

     As Susan Miller Cavitch states in her book The Natural Soap Book: Making Herbal and Vegetable Based Soaps,

“As we become more and more comfortable with synthetics in all areas of our lives, we run the risk of losing natural defenses and continually needing greater synthetic intervention.  Skin care is but one facet of this phenomenon.  Our skin is remarkably capable of functioning on its own to protect us, but, as we use more and more harsh, foreign substances, we alter the body’s chemical makeup and leave our skin without its natural defenses.  We risk becoming dependent on stronger and stronger synthetics to take the place of the body’s natural systems.  We must each, as individuals, decide which route to go – the way of nature or the way of the lab.”

Some individuals have chosen not to use the commercial “soaps” and continue to make soap in the home. The traditional name “soaper”, for a soapmaker, is still used by those who make soap as a hobby. Those who make their own soaps are also known as soapcrafters.  Many of these soapcrafters have expanded their soap making from a hobby basis to a business basis to make natural soap more available to the public at large.  Many come up with their own recipes using different butters and essential oils to help those with sensitive skin or who just want to pamper their skin so that it retains its elasticity, moisture, and smoothness.

The most popular soap making process today is the cold process method, in which fats such as olive oil react with lye. Soapmakers sometimes use the melt and pour process, in which a premade soap base is melted and poured into individual molds. Some soapers also practice other processes, such as the historical hot process, and make special soaps such as clear soap (also known as glycerin soap).

Handmade soap differs from industrial soap in that, usually, an excess of fat is used to consume the alkali (superfatting), and in that the glycerin is not removed. Superfatted soap, soap which contains excess fat, is more skin-friendly than industrial soap; though, if not properly formulated, it can leave users with a “greasy” feel to their skin. Often, emollients such as jojoba oil or shea butter are added ‘at trace’ (the point at which the saponification process is sufficiently advanced that the soap has begun to thicken), after most of the oils have saponified, so that they remain unreacted in the finished soap.
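
The arithmetic behind superfatting is straightforward: the lye is deliberately discounted so that a small percentage of the oils can never saponify. Below is a minimal sketch of that calculation in Python; the saponification (SAP) values are rough, illustrative figures only, the names are ours rather than from any particular soapmaking tool, and real recipes should be checked against a trusted lye calculator.

    # Rough sketch of the lye discount used for superfatting (illustrative only).
    # SAP values here are approximate grams of NaOH needed to saponify 1 gram of oil;
    # real values vary from batch to batch, so use a trusted lye calculator for actual recipes.
    APPROX_SAP_NAOH = {
        "olive oil": 0.135,
        "coconut oil": 0.183,
        "shea butter": 0.128,
    }

    def lye_needed(oils_in_grams, superfat=0.05):
        """Grams of NaOH for the given oils, discounted so that roughly
        `superfat` (e.g. 5%) of the oils remain unsaponified in the finished bar."""
        full_lye = sum(APPROX_SAP_NAOH[oil] * grams for oil, grams in oils_in_grams.items())
        return full_lye * (1 - superfat)

    # Example: a small 700 g batch of oils with a 5% superfat.
    batch = {"olive oil": 500, "coconut oil": 150, "shea butter": 50}
    print(round(lye_needed(batch), 1), "g NaOH")   # about 96.3 g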

     Natural soapcrafters today have many different ingredients to select from to produce wonderful and varied soap bars.  These ingredients consist of:

  • base oils available in today’s market such as coconut oil, jojoba oil, avocado oil, castor oil, cottonseed oil, olive oil, palm oil, palm kernel oil, peanut oil and soybean oil
  • various butters like shea butter, mango butter, and cocoa butter for extra moisturizing capabilities
  • other nutrients such as sweet almond oil, avocado oil, aloe vera, calendula oil, carrot root oil, various clays, and seaweed
  • essential oils including peppermint, eucalyptus, spearmint, chamomile, geranium, rosemary, lavender, etc., for scenting and therapeutic effects
  • and various herbs and spices for color

Soapmakers today can produce artistic therapeutic soap bars high in moisturizers for the discerning soap shopper.

history of the hair comb

Combs have been on the scene ever since humans have had hair on their heads, which is quite some time; the date perhaps goes back beyond the Old Stone Age. Man, being man and not a lion, would not be content to let his mane run wild and free, so he had to find some way to tame it. First on the list of combing implements must have been the fingers, so in a way the fingers are the first combs of history.

A comb is a solid tool, usually flat and always with teeth, used for caring for human hair and for cleaning other fibrous material. The etymology goes back to ancient Greek and Sanskrit words meaning “tooth” or “to bite”. Among tools it is perhaps the oldest. Exquisite combs going back about 5,000 years, to the time of the first Indo-European migrations, have been found in excavations of the ancient Persian Empire. Many historical combs can be seen in museums. In the Hermitage Museum there is an exquisitely carved comb from the Scythian period, circa 400 BC, known as the Solokha comb. On its head are depicted three human figures, one on horseback, about to kill an animal.
Combs were not always used for cosmetic purposes. They were also used to comb out hair parasites such as lice that took shelter in human hair. The fact is that no traditional civilization has yet been found that did not use combs! If you share combs, then you share parasites too: parasites love traveling from scalp to scalp via the comb route, and they travel in groups, with families and eggs. Thus a comb is extremely popular with lice, fleas, mites and fungus. Sometimes the matter becomes serious: the comb is said to have been a carrier of the Black Plague, which finished off nearly one third of Europe in the Middle Ages. There are special nit combs and flea combs to tackle the menace of macroscopic vermin.
The comb may be turned into a musical instrument by stretching the leaf of a plant or a thin piece of paper across its teeth. Humming against it with pursed lips produces an ethereal, almost heavenly sound; this principle is used in the musical instrument called the kazoo. The shape, material and length of the teeth determine the harmonic qualities of the comb.
Police investigators love combs. A comb is one of the first items they seek at a crime scene: from it they carefully collect samples of hair and dandruff for clues. Modern DNA testing makes the hair on a comb an important item for proving or disproving accusations.
Some experts on hair care are of the opinion that it is best to use combs with wide teeth instead of hairbrushes and fine-toothed plastic combs. Wooden combs are supposed to be anti-static and free of sharp seams, which prevents snapping and tangling of hair. The hairbrush nevertheless continues to be popular; it is bigger than the usual comb and is used for managing and styling hair.
Combs are frequently mentioned in religious books. Among Hindus, during the period of mourning the family is not supposed to brush, comb or oil the hair; for some groups this continues for a fortnight. After the last rites the men shave off their hair while the women get back to the earnest job of combing the tangled mass. Indian mendicants take a vow not to comb their matted locks, which dangle in knotty splendour and are revered by all. In mythology the River Ganga splashed onto the matted locks of Lord Shiva to find shelter and support.
Combs are universal; no corner of the globe is without them, but each region has its own style and favoured materials. Wooden combs are still quite common in village fairs in Asia, usually made of boxwood or the wood of cherry and pine trees; the best wooden combs are made by hand and polished. Some combs are made from buffalo horn, and early ones were made from ivory and bone. Silver, gold, tin and brass were also used. Tortoiseshell and horn combs were more pliable and soft than the others, as they could be easily moulded. Generally combs were shaped from the raw material specific to the locality, and this had its downside: the Chinese kingfisher’s exquisite turquoise feathers were used to make classy combs, which led to the near extinction of the species. A collector of African combs can identify the locale from the wood used in each specific comb.
When the demand for ivory was getting too strong and the supply too low, two brothers, Isaiah and John Hyatt, discovered celluloid in 1869 after experimenting in the laboratory for some time. The first plastic consisted of nitrocellulose and camphor. A revolution was kicked off in the world of combs: they became cheaper and faster to make while keeping up the appearance of ivory and tortoiseshell. It was good news for the animals, which got a breather to comb the nature reserves without fear.
In the USA, one Enoch Noyes opened a small shop selling combs made from cattle horns. A German named Cleland joined him, bringing technical know-how and tools, and within a few years a number of skilled horn smiths found employment under them. Leominster, Massachusetts, has been known ever since as the comb capital of the country.
Combs are no longer the prerogative of humans. All one has to do is to pay a visit to the pet shop. There are various types of combs and brushes for cats, dogs and horses. There are different varieties specific to each family of dogs and cats.
Combing is said to have an effect like acupuncture: the nerves get stimulated. Holistic medicine practitioners advise the use of combs to get over a feeling of depression; to come back to a feeling of well-being with a bounce, just comb your hair vigorously. For the best effect keep changing the comb, so that the teeth stay sharp and pointed enough for the required results. To avoid infection it is best to observe comb hygiene: like the toothbrush, there should be a separate comb for each individual.
Collectors can share interesting experiences. One collected a delicate honey-amber coloured piece, with 21 teeth still intact, from an African flea market! A woman’s crowning glory is her cascade of long tresses, and the comb not only smooths it out but can also be used to keep it in place; some combs were carved and decorated with rhinestones. In yesteryears men fell in love with the rippling long tresses of women. Another large comb, with its back broken off and a top made of rhinestone, came from a church white elephant sale.

Research into combs is still relatively new, but efforts are underway to bring in enthusiasts and scholars to find out more about this oldest of tools in the history of mankind. This led to the formation of a club in 1993, the Antique Comb Collectors Club, a non-profit organization intense in its search to comb the past for information and antique pieces.

history of scissors

Scissors are hand-operated cutting instruments. They consist of a pair of metal blades pivoted so that the sharpened edges slide against each other when the handles (bows) opposite to the pivot are closed. Scissors are used for cutting various thin materials, such as paper, cardboard, metal foil, thin plastic, cloth, rope and wire. Scissors can also be used to cut hair and food. Scissors and shears are functionally equivalent, but larger implements tend to be called shears.

There are many types of scissors and shears for different purposes. For example, children’s scissors, used only on paper, have dull blades and rounded corners to ensure safety. Scissors used to cut hair or fabric must be much sharper. The largest shears used to cut metal or to trim shrubs must have very strong, sharp blades.

Specialized scissors include sewing scissors, which often have one sharp point and one blunt point for intricate cutting of fabric, and nail scissors, which sometimes have curved blades for cutting fingernails and toenails.

Special kinds of shears include pinking shears, which have notched blades that cut cloth to give it a wavy edge, and thinning shears, which have teeth that cut every second hair strand rather than every strand, giving the illusion of thinner hair.

The noun “scissors” is treated as a plural noun, and therefore takes a plural verb (“these scissors are”). Alternatively, this tool is also referred to as “a pair of scissors”, in which case it (a pair) is singular and therefore takes a singular verb (“this pair of scissors is”).

The word shears is used to describe similar instruments that are larger in size and used for heavier cutting. Geographical opinions vary as to the size at which ‘scissors’ become ‘shears’, but the dividing line is often put at between six and eight inches in length.

Scissors were most likely invented around 1500 BC in ancient Egypt, and the earliest known scissors appeared in Mesopotamia 3,000 to 4,000 years ago. These were of the ‘spring scissor’ type, comprising two bronze blades connected at the handles by a thin, flexible strip of curved bronze which served to hold the blades in alignment, to allow them to be squeezed together, and to pull them apart when released.

Spring scissors continued to be used in Europe until the sixteenth century. However, pivoted scissors of bronze or iron, in which the blades were pivoted at a point between the tips and the handles, the direct ancestor of modern scissors, were invented by the Romans around AD 100. They entered common use not only in ancient Rome, but also in China, Japan, and Korea, and the idea is still used in almost all modern scissors.

During the Middle Ages and Renaissance, spring scissors were made by heating a bar of iron or steel, then flattening and shaping its ends into blades on an anvil. The center of the bar was heated, bent to form the spring, then cooled and reheated to make it flexible.

William Whiteley & Sons (Sheffield) Ltd. is officially recognized as first starting the manufacture of scissors in the year 1760, although it is believed the business began trading even earlier. The first trade-mark, 332, was granted in 1791.

Pivoted scissors were not manufactured in large numbers until 1761, when Robert Hinchliffe produced the first pair of modern-day scissors made of hardened and polished cast steel. He lived in Cheney Square, London and was reputed to be the first person who put out a signboard proclaiming himself “fine scissor manufacturer”.

During the nineteenth century, scissors were hand-forged with elaborately decorated handles. They were made by hammering steel on indented surfaces known as bosses to form the blades. The rings in the handles, known as bows, were made by punching a hole in the steel and enlarging it with the pointed end of an anvil.

In 1649, in a part of Sweden that is now in Finland, an ironworks was founded in the “Fiskars” hamlet between Helsinki and Turku. In 1830, a new owner started the first cutlery works in Finland, making, among other items, scissors with the Fiskars trademark. In 1967, Fiskars Corporation introduced new methods to scissors manufacturing.

A pair of scissors consists of two pivoted blades. In lower quality scissors the cutting edges are not particularly sharp; it is primarily the shearing action between the two blades that cuts the material. In high quality scissors the blades can be both extremely sharp and tension sprung, to increase the cutting and shearing tension only at the exact point where the blades meet. The hand movement (pushing with the thumb, pulling with the fingers in right-handed use) can add to this tension. An ideal example is high quality tailors’ scissors or shears, which need to be able to cut (and not simply tear apart) delicate cloths such as chiffon and silk perfectly.

Children’s scissors are usually not particularly sharp, and the tips of the blades are often blunted or ’rounded’ for safety.

Mechanically, scissors are a first-class double-lever with the pivot acting as the fulcrum. For cutting thick or heavy material, the mechanical advantage of a lever can be exploited by placing the material to be cut as close to the fulcrum as possible. For example, if the applied force (i.e., the hand) is twice as far away from the fulcrum as the cutting location (e.g., piece of paper), the force at the cutting location is twice that of the applied force at the handles. Scissors cut material by applying a local shear stress at the cutting location which exceeds the material’s shear strength.
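
As a quick illustration of that lever arithmetic, here is a minimal sketch in Python (the numbers and the function name are ours, purely for demonstration):

    # First-class lever: the force delivered at the cut scales with the ratio of the
    # handle distance to the cutting distance, both measured from the pivot (fulcrum).
    def cutting_force(applied_force, handle_distance, cut_distance):
        """Ideal, friction-free force at the cutting point, from the law of the lever:
        applied_force * handle_distance = cutting_force * cut_distance."""
        return applied_force * handle_distance / cut_distance

    # The example from the text: the hand is twice as far from the pivot as the paper,
    # so the force at the paper is double the force applied at the handles.
    print(cutting_force(applied_force=10.0, handle_distance=8.0, cut_distance=4.0))  # 20.0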

Specialized scissors, such as bolt cutters, exploit leverage by having a long handle but placing the material to be cut close to the fulcrum.

For people who do not have the use of their hands, there are specially designed foot operated scissors. Some quadriplegics can use a motorized mouth-operated style of scissor.

Kitchen scissors, also known as kitchen shears, are traditionally used in the kitchen for food preparation, although due to their tough nature they can serve many other purposes. In modern times they are often made from stainless steel (for food hygiene and oxidation resistance). They often have other kitchen functions incorporated besides cutting, such as bottle-cap openers built into the handles.

Most scissors are best suited for use with the right hand, but left-handed scissors are designed for use with the left hand. Because scissors have overlapping blades, they are not symmetric. This asymmetry is true regardless of the orientation and shape of the handles: the blade that is on top always forms the same diagonal regardless of orientation. Human hands are also asymmetric, and when closing, the thumb and fingers do not close vertically, but have a lateral component to the motion; specifically, the thumb pushes out and the fingers pull inwards. For right-handed scissors held in the right hand, the thumb blade is further from the user’s body, so that the natural tendency of the right hand is to force the cutting blades together. Conversely, if right-handed scissors are held in the left hand, the natural tendency of the left hand would be to force the cutting blades laterally apart. Furthermore, with right-handed scissors held in the right hand, the shearing edge is visible, but when used with the left hand the cutting edge of the scissors is behind the top blade, and one cannot see what is being cut.

Some scissors are marketed as ambidextrous. These have symmetric handles so there is no distinction between the thumb and finger handles, and have very strong pivots so that the blades simply rotate and do not have any lateral give. However, most “ambidextrous” scissors are in fact still right-handed in that the upper blade is on the right, and hence is on the outside when held in the right hand. Even if they successfully cut, the blade orientation will block the view of the cutting line for a left-handed person. True ambidextrous scissors are possible if the blades are double-edged and one handle is swung all the way around (to almost 360 degrees) so that the back of the blades become the new cutting edges. Patents have been awarded for true ambidextrous scissors.

history of clothes iron


The history of the clothes iron can be traced back to as early as 400 BC. The Greeks are considered to be the first to have used a roller iron to create pleats on linen robes. The Romans used a hand mangle, quite similar to the modern-day iron, to beat clothes; through the beating and hitting, wrinkles were removed. There was also the prelum, a wooden press that looked rather like a winepress. The Romans thus had several tools for pressing fabric to get rid of wrinkles.

In the first century BC the Chinese used a metal pan filled with burning charcoal to press clothes. The 17th century is generally recognized as the era when the clothes iron proper came into the picture, with the appearance of sad irons or sadirons. In older usage the word sad meant solid, and this is where the sad iron got its name. They were also known as flat irons, each being a thick metal slab with a handle. Some time later this model was improved into a metal box that could hold hot coals.

Flattish hand-size stones could be rubbed over woven cloth to smooth it, polish it, or to press in pleated folds. Simple round linen smoothers made of dark glass have been found in many Viking women’s graves, and are believed to have been used with smoothing boards. Archaeologists know there were plenty of these across medieval Europe, but they aren’t completely sure how they were used. Water may have been used to dampen linen, but it is unlikely the smoothers were heated.

More recent glass smoothers, such as examples from Wales and England, often had handles. They were also called slickers, slickstones, sleekstones, or slickenstones. Decorative 18th and 19th century glass smoothers in an “inverted mushroom” shape may turn up at antiques auctions. Occasionally they are made of marble or hard wood.

Slickstones were standard pieces of laundering equipment in the late Middle Ages, in England and elsewhere, and went on being used up to the 19th century, long after the introduction of metal irons. They were convenient for small jobs when you didn’t want to heat up irons, lay out ironing blankets on boards, and so on.

Other methods were available to the rich. Medieval launderers preparing big sheets, tablecloths and the like for a large household may have used frames to stretch damp cloth smooth, or passed it between “calenders” (rollers). They could also flatten and smooth linen in screw-presses of a kind known in Europe since Roman times, when they were used for smoothing cloth. Later presses sometimes doubled as storage furniture, with linen left folded flat under the board after pressing even when there were no drawers.

Even in modest homes with no presses, large items needed to be tackled with something bigger than a slickstone. They could be smoothed with a mangle board and rolling pin combination; many wonderfully carved antique Scandinavian and Dutch mangle boards have been preserved by collectors. The board, often carved by a young man for his bride-to-be, was pressed back and forth across cloth wound on the roller.

In England such boards, paddles or bats were called battledores, battels, beatels, beetles, or other “beating” names. In Yorkshire a bittle and pin was used in the same way as the Scandinavian mangle board and roller. The earlier mechanical mangles copied this method of pressing a flat surface across rollers. The box mangle was a heavy box weighted with stones that functioned as the “mangle board”, with linen wound on cylinders underneath or spread under the rollers. The boards and bats used for smoothing were similar to wooden implements used in washing: washing beetles used to beat clothes clean, perhaps in a stream. Sometimes they were cylindrical like the mangle rollers, sometimes flat. Instead of pressing, you could simply whack your household linen with a bat or paddle against a flat surface, as witnessed in the Scottish Borders in 1803 by Dorothy Wordsworth.

Early box mangles, like Baker’s Patent Mangle, were devised for pressing and smoothing. Mangles with two rollers could also be used for wringing water out of fabric. Many Victorian households would complete the “ironing” of sheets and table-linen with a mangle, using hot irons just for clothing. In the UK, laundry could be sent for smoothing to a mangle-woman working at home, often a widow earning pennies with a mangle bought by well-wishers after her husband’s death. In the late 19th and early 20th centuries, US commercial laundries described the mangling or pressing of large items as “flatwork” to distinguish it from the detailed ironing given to shaped clothing.
Blacksmiths started forging simple flat irons in the late Middle Ages. Plain metal irons were heated by a fire or on a stove. Some were made of stone, such as soapstone irons from Italy. Earthenware and terracotta were also used, from the Middle East to France and the Netherlands.

Flat irons were also called sad irons or smoothing irons. Metal handles had to be gripped in a pad or thick rag. Some irons had cool wooden handles, and in 1870 a detachable handle was patented in the US; this stayed cool while the metal bases were heated, and the idea was widely imitated. The sad in sad iron (or sadiron) is an old word for solid, and in some contexts this name suggests something bigger and heavier than a flat iron. Goose or tailor’s goose was another name for an iron, which came from the goose-neck curve in some handles. In Scotland people spoke of gusing (goosing) irons.

You’d need at least two irons on the go together for an effective system: one in use, and one re-heating. Large households with servants had a special ironing-stove for this purpose. Some were fitted with slots for several irons, and a water-jug on top.

At home, ironing traditional fabrics without the benefit of electricity was a hot, arduous job. Irons had to be kept immaculately clean, sand-papered and polished. They had to be kept away from burning fuel, and be regularly but lightly greased to avoid rusting. Beeswax prevented irons from sticking to starched cloth. Constant care was needed over temperature: experience would help decide when the iron was hot enough, but not so hot that it would scorch the cloth. A well-known test was spitting on the hot metal, but Charles Dickens describes someone with a more genteel technique in The Old Curiosity Shop. She held “the iron at an alarmingly short distance from her cheek, to test its temperature…”

The same straightforward “press with hot metal” technique can be seen in Egypt where a few traditional “ironing men” (makwagi) still use long, heavy pieces of iron, pressed across the cloth with their feet. Berber people in Algeria traditionally use heated metal ovals on long handles, called fers kabyles (Kabyle irons) in France, where they were adopted for intricate ironing tasks.

If you make the base of your iron into a container, you can put glowing coals inside it and keep it hot a bit longer. This is a charcoal iron, still used in India, where it is not unusual to have your ironing done by a “press wallah” at a stall with a brazier nearby. A hinged lid and air holes allow the charcoal to keep smouldering. These are sometimes called ironing boxes, or charcoal box irons, and may come with their own stand.

For centuries charcoal irons have been used in many different countries. When they have a funnel to keep smoky smells away from the cloth, they may be called chimney irons. Antique charcoal irons are attractive to many collectors, while modern charcoal irons are manufactured in Asia and also used in much of Africa. Some of these are sold to Westerners as reproductions or replica “antiques”.

Some irons were shallower boxes and had fitted “slugs” or “heaters” – slabs of metal – which were heated in the fire and inserted into the base instead of charcoal. It was easier to keep the ironing surface spotlessly clean, away from the fuel, than with flatirons or charcoal irons. Brick inserts could be used for a longer-lasting, less intense heat. These are box or slug irons, once known as ironing boxes too. In some countries they are called ox-tongue irons after a particular shape of insert.

Late 19th century iron designs experimented with heat-retaining fillings. Designs of this period became more and more ingenious and complicated, with reversible bases, gas jets and other innovations. By 1900 there were electric irons in use on both sides of the Atlantic.

Ironing continued to be done with hot coals in open metal pans in China, the basic principle no different from that of an enclosed charcoal iron. Pan irons could be simple or highly decorative. Further west, clay smoothers were sometimes used: solid ones could be heated for pressing, while others were designed to hold hot embers, like North African terracotta irons. Ladies preparing newly woven silk in a 12th century Chinese painting are shown using a pan iron in just this way, as are the ironers in some 19th century Korean drawings, although Koreans were traditionally known for smoothing their clothes with pairs of ironing sticks, beating the cloth rhythmically on a stone support. A single club for beating clothes smooth was used in Japan, on a stand called a kinuta. In many parts of the world similar techniques were used in both cloth manufacturing and laundering: in Senegal, for example.

The first electric iron is believed to have been invented by Henry W. Seely in 1882, and was known as the electric flatiron. This iron was heated by a detachable wire; it took a long time to heat up and cooled down quickly in use. By 1892, companies such as Crompton and Co. and General Electric had introduced handheld electric resistance irons. As the years passed, the technology of the iron continued to advance.

There were many inventors, such as Earl Richardson and Joseph Meyers, who contributed to the improvement of the electric iron. Finally, in 1926, the steam iron came into existence; the first company to introduce one was the Eldec Company. Steam irons made it easier to flatten and smooth wrinkled fabrics. Though the steam iron came into existence quite early, it became popular only in the 1940s. To reach this stage the electric iron had to travel a long way, leaving behind a remarkable history.

history of the umbrella

It is believed that the first umbrellas were made of silk and that they originated in China more than two thousand years ago. They may even be older than that (at least as parasols) as there is evidence of their presence in the art and artifacts of ancient Egypt, Assyria and Greece. Artistic depictions at Nineveh reveal that the umbrella was generally carried over the king, but it is always shown open. It was often edged with tassels and adorned by a flower or some other ornament at its top. On several bas-reliefs at Persepolis, the king is represented under an umbrella, which a female slave holds over his head. Their primary purpose (both slaves and parasols) was to provide shade from the sun. The Chinese were the first to waterproof their “parasols” in order to use them as protection against the rain.

The ancient Greeks and Romans regarded the umbrella (skiadeion, a word meaning “day shade”) as an item of luxury. It was carried over the head of the effigy of Bacchus, and Athenian daughters were required to bear parasols over the heads of maidens at the festival of the Panathenaea. At the British Museum, Hamilton vases bear the image of a princess holding a parasol. In Rome, when the veil could not be spread over the roof of a theater, it was customary for both women and effeminate men to defend themselves against the sun with the umbrella of the period, known as an umbraculum. These were made either of skin or leather and could be raised or lowered as circumstances might require. (Perhaps it was a way to avoid the direct viewing of Christians being eaten by lions or maybe, after that happened a few times too many, the other way around.)

Although the practice of using an umbrella in Renaissance Italy was probably a vestige of the Roman influence, as late as 1608 Thomas Coryat speaks of the invention after the description of Italian fans. “Many…do carry other fine things, of a far greater price…which they commonly call umbrellaces; that is, things that minister shadow unto them, for shelter against the scorching heat of the sun. These are made of leather, sometimes answerable to the form of a little canopy, and hooped in the inside with…wooden hoopes, that extend the umbrella into a pretty large compasse. They are used especially by horsemen, who carry them in their hands when they ride, fastening the end of the handle upon one of their thighs; and they impart so long a shadow unto them, for shelter of the sun from the upper part of their bodies.”

It is possible that umbrellas existed at the very same time in Spain and Portugal, from where they spread to the New World. Daniel Defoe makes mention of an umbrella in Robinson Crusoe: before he meets his faithful friend Friday, Crusoe describes umbrellas that he has seen in the Brazils, and he constructs one of his own in imitation of them. Subsequently, one type of very heavy umbrella became known as “The Robinson.”

Crusoe goes on to say: “I covered it with skins, the hair outward, so that it cast off the rain like a penthouse, and off the sun so effectually that I could walk out in the hottest of weather with greater advantage than I could before in the coolest.”

The word “umbrella” comes from the Latin word “umbra,” meaning shade or shadow. Sixteenth century Europe, particularly the rainy northern regions, saw the introduction of the umbrella primarily as an accessory for women. The literature of the time indicates that the exteriors of umbrellas were composed entirely of feathers, in imitation of the plumage of water birds. (Afterwards, oiled silk was commonly used.)

The future of the umbrella as strictly a female accessory changed when Jonas Hanway (1712-86) came upon the English scene. The English writer and traveler, who had journeyed through Persia, carried and used an umbrella publicly in England for thirty years; his claims of being in delicate health seemed to justify his crossing of the barrier. He popularized its use among men (who before that got very wet, even though umbrellas remained in vogue among women whenever it rained). Before his time, only those men known as “Macaronies” dared to carry an umbrella (before going into evolution to become noodles popularly used with most cheeses). For years, English gentlemen referred to their umbrellas as “Hanways.”

Resistance to the umbrella was not only a matter of gender, but of economics as well. Many coachmen regarded rainy weather as something designed for their advantage, and felt the public was entitled to no other protection from it than what their vehicles could offer. One John MacDonald, a footman who wrote a memoir dated about 1790, claimed that upon appearing with a fine silk umbrella which he had brought from Spain, he was saluted with the cry of “Frenchman, why don’t you get a coach?” There was a kind of transition period shortly after this time, during which umbrellas were kept at coffeehouses, liable to be used by gentlemen on special occasions (wet ones, no doubt) under cover of darkness (and possibly masks). It was still, however, stubbornly considered an effeminate accessory.

Early English umbrellas were made of oiled silk and when wet, were particularly difficult to open or close. They were very expensive, heavy and inconvenient until silk and gingham replaced oiled silk. The umbrellas of this time had a ring at the top by which they were usually carried on the finger when unopened and by which, when not in use, they could be hung on the back of a door. A wooden handle came to a rounded point to rest on the ground. These umbrellas were very popular with older women up until around 1810.

The first umbrella shop of record, which opened in 1830, is still today at its original address, 53 New Oxford Street in London. James Smith and Sons sold umbrellas that were works of art; many made of wood and whalebone and covered with alpaca or an oiled canvas. Artisans were employed and paid handsomely to design decorative, curved handles out of hard and precious woods like ebony.

In 1852, Samuel Fox invented the steel-ribbed umbrella design, claiming it to be most practical as it was a way to use up excess stocks of farthingale stays, which were used in women’s corsets. Fox also founded the English Steel Company. In 1885, the African-American inventor William C. Carter patented the very first umbrella stand.

The parasol is most associated with Victorian society. Its popularity may well be ascribed to the Victorian woman’s obsession with maintaining a fair complexion. More than a trademark of beauty, pale skin was a reflection of class, indicating to the world that the woman did not have to work outdoors and was a lady of refinement. Parasols were as much a part of a lady’s wardrobe as her gloves, shoes, hats, fans and stockings. Each outfit a fashionable woman owned had its very own accompanying parasol. They were also popular gifts, and, like the fan and lacy handkerchief, parasols were flirting aids with their own secret language. They were very popular well into the Edwardian era of the early 1900s.

Out of vogue for almost a century, the parasol returned to the fashion scene around 1990, making a comeback like an aging but still beautiful movie queen. This was due largely to an increased awareness of skin cancer and the fact that it was no longer considered healthy or wise to remain in the sun for too long. Parasols are seen with more and more regularity in the streets of Great Britain, France and especially Japan. New materials are being employed that have ultra-violet protection and filter out 97% of dangerous ultra-violet rays.

The most beautiful parasols in the world come from the land of their origin, China. Here they have taken on their own unique persona, even becoming common paraphernalia for artists of the stage: high-wire performers use parasols to balance themselves on the wire. They are made from a variety of materials (the umbrellas, not the wirewalkers), including oilpaper, cotton, silk, plastic film and nylon. The best oilpaper umbrellas are thought to be those from Fujian and Hunan provinces. Their bamboo frames are specially treated against mould and worms. The paper covers are hand-painted with flowers, birds, figures and landscapes and then coated with oil so that they are not only practical but pretty and durable as well. They may be used in either rain or sunshine.

The prettiest Chinese umbrellas are those covered with silk, and the silk parasols of Hangzhou are both practical and breathtaking works of art. The very thin silk is printed with landscapes and fixed onto a bamboo frame. Usually weighing a little over eight ounces and about twenty inches long, they are popular gifts for tourists as well. Local girls carry them as part of their everyday attire for protection against the hot and unforgiving sun.

The next time it rains, think about all the years gone by when people got, among other things, very wet. It will endear you to every umbrella you have ever owned, as we all tend to take familiar things for granted. So say hello and thank you to your practical accessory as you raise it to venture out in the wet and the cold. Think about Gene Kelly and his incomparable solo, which he did with an umbrella too (not to mention golden dancing feet and a brilliant choreographic score). The image will not get you wet, for your head is properly covered, but if all goes well, as you walk down the street, it’s sure to make you smile.

history of glass

The discovery of glass
Natural glass has existed since the beginnings of time, formed when certain types of rocks melt as a result of high-temperature phenomena such as volcanic eruptions, lightning strikes or the impact of meteorites, and then cool and solidify rapidly. Stone-age man is believed to have used cutting tools made of obsidian (a natural glass of volcanic origin also known as hyalopsite, Iceland agate, or mountain mahogany) and tektites (naturally-formed glasses of extraterrestrial or other origin, also referred to as obsidianites).

According to the ancient Roman historian Pliny (AD 23-79), Phoenician merchants transporting stone actually discovered glass (or rather became aware of its existence accidentally) in the region of Syria around 5000 BC. Pliny tells how the merchants, after landing, rested cooking pots on blocks of natron (a naturally occurring soda) placed by their fire. With the intense heat of the fire, the blocks eventually melted and mixed with the sand of the beach to form an opaque liquid.

This brief history looks, however, at the origins and evolution of man-made glass.

A craft is born
The earliest man-made glass objects, mainly non-transparent glass beads, are thought to date back to around 3500 BC, with finds in Egypt and Eastern Mesopotamia. In the third millennium, in central Mesopotamia, the basic raw materials of glass were being used principally to produce glazes on pots and vases. The discovery may have been coincidental, with calciferous sand finding its way into an overheated kiln and combining with soda to form a coloured glaze on the ceramics. It was then, above all, Phoenician merchants and sailors who spread this new art along the coasts of the Mediterranean.

The oldest fragments of glass vases (evidence of the origins of the hollow glass industry), however, date back to the 16th century BC and were found in Mesopotamia. Hollow glass production was also evolving around this time in Egypt, and there is evidence of other ancient glassmaking activities emerging independently in Mycenae (Greece), China and North Tyrol.

Early hollow glass production
After 1500 BC, Egyptian craftsmen are known to have begun developing a method for producing glass pots by dipping a core mould of compacted sand into molten glass and then turning the mould so that molten glass adhered to it. While still soft, the glass-covered mould could then be rolled on a slab of stone in order to smooth or decorate it. The earliest examples of Egyptian glassware are three vases bearing the name of the Pharaoh Thoutmosis III (1504-1450 BC), who brought glassmakers to Egypt as prisoners following a successful military campaign in Asia.

There is little evidence of further evolution until the 9th century BC, when glassmaking revived in Mesopotamia. Over the following 500 years, glass production centred on Alexandria, from where it is thought to have spread to Italy.

The first glassmaking “manual” dates back to around 650 BC. Instructions on how to make glass are contained in tablets from the library of the Assyrian king Ashurbanipal (669-626 BC).

Starting to blow
A major breakthrough in glassmaking was the discovery of glassblowing some time between 27 BC and AD 14, attributed to Syrian craftsmen from the Sidon-Babylon area. The long thin metal tube used in the blowing process has changed very little since then. In the last century BC, the ancient Romans then began blowing glass inside moulds, greatly increasing the variety of shapes possible for hollow glass items.

The Roman connection
The Romans also did much to spread glassmaking technology. With its conquests, trade relations, road building, and effective political and economic administration, the Roman Empire created the conditions for the flourishing of glassworks across western Europe and the Mediterranean. During the reign of the emperor Augustus, glass objects began to appear throughout Italy, in France, Germany and Switzerland. Roman glass has even been found as far afield as China, shipped there along the silk routes.

It was the Romans who began to use glass for architectural purposes, with the discovery of clear glass (through the introduction of manganese oxide) in Alexandria around AD 100. Cast glass windows, albeit with poor optical qualities, thus began to appear in the most important buildings in Rome and the most luxurious villas of Herculaneum and Pompeii.

With the geographical division of the empires, glass craftsmen began to migrate less, and eastern and western glassware gradually acquired more distinct characteristics. Alexandria remained the most important glassmaking area in the East, producing luxury glass items mainly for export. The world famous Portland Vase is perhaps the finest known example of Alexandrian skills. In Rome’s Western empire, the city of Köln in the Rhineland developed as the hub of the glassmaking industry, adopting, however, mainly eastern techniques. Then, the decline of the Roman Empire and culture slowed progress in the field of glassmaking techniques, particularly through the 5th century. Germanic glassware became less ornate, with craftsmen abandoning or not developing the decorating skills they had acquired.

The early Middle Ages
Archaeological excavations on the island of Torcello near Venice, Italy, have unearthed objects from the late 7th and early 8th centuries which bear witness to the transition from ancient to early Middle Ages production of glass.

Towards the year 1000, a significant change in European glassmaking techniques took place. Given the difficulties in importing raw materials, soda glass was gradually replaced by glass made using the potash obtained from the burning of trees. At this point, glass made north of the Alps began to differ from glass made in the Mediterranean area, with Italy, for example, sticking to soda ash as its dominant raw material.

Sheet glass skills
The 11th century also saw the development by German glass craftsmen of a technique – then further developed by Venetian craftsmen in the 13th century – for the production of glass sheets. By blowing a hollow glass sphere and swinging it vertically, gravity would pull the glass into a cylindrical “pod” measuring as much as 3 metres long, with a width of up to 45 cm. While still hot, the ends of the pod were cut off and the resulting cylinder cut lengthways and laid flat. Other types of sheet glass included crown glass (also known as “bullions”), relatively common across western Europe. With this technique, a glass ball was blown and then opened outwards on the opposite side to the pipe. Spinning the semi-molten ball then caused it to flatten and increase in size, but only up to a limited diameter. The panes thus created would then be joined with lead strips and pieced together to create windows. Glazing remained, however, a great luxury up to the late Middle Ages, with royal palaces and churches the most likely buildings to have glass windows. Stained glass windows reached their peak as the Middle Ages drew to a close, with an increasing number of public buildings, inns and the homes of the wealthy fitted with clear or coloured glass decorated with historical scenes and coats of arms.

Venice
In the Middle Ages, the Italian city of Venice assumed its role as the glassmaking centre of the western world. The Venetian merchant fleet ruled the Mediterranean waves and helped supply Venice’s glass craftsmen with the technical know-how of their counterparts in Syria, and with the artistic influence of Islam. The importance of the glass industry in Venice can be seen not only in the number of craftsmen at work there (more than 8,000 at one point), but also in a 1271 ordinance, a type of glass sector statute, which laid down certain protectionist measures such as a ban on imports of foreign glass and a ban on foreign glassmakers wishing to work in Venice: non-Venetian craftsmen were clearly sufficiently skilled to pose a threat.

Until the end of the 13th century, most glassmaking in Venice took place in the city itself. However, the frequent fires caused by the furnaces led the city authorities, in 1291, to order the transfer of glassmaking to the island of Murano. The measure also made it easier for the city to keep an eye on what was one of its main assets, ensuring that no glassmaking skills or secrets were exported.

In the 14th century, another important Italian glassmaking industry developed at Altare, near Genoa. Its importance lies largely in the fact that it was not subject to the strict statutes of Venice as regards the exporting of glass working skills. Thus, during the 16th century, craftsmen from Altare helped extend the new styles and techniques of Italian glass to other parts of Europe, particularly France.

In the second half of the 15th century, the craftsmen of Murano started using quartz sand and potash made from sea plants to produce particularly pure crystal. By the end of the 16th century, 3,000 of the island’s 7,000 inhabitants were involved in some way in the glassmaking industry.

Lead crystal
The development of lead crystal has been attributed to the English glassmaker George Ravenscroft (1632-1683), who patented his new glass in 1674. He had been commissioned to find a substitute for the Venetian crystal produced in Murano and based on pure quartz sand and potash. By using higher proportions of lead oxide instead of potash, he succeeded in producing a brilliant glass with a high refractive index which was very well suited for deep cutting and engraving.

Advances from France
In 1688, in France, a new process was developed for the production of plate glass, principally for use in mirrors, whose optical qualities had, until then, left much to be desired. The molten glass was poured onto a special table and rolled out flat. After cooling, the plate glass was ground on large round tables by means of rotating cast iron discs and increasingly fine abrasive sands, and then polished using felt discs. The result of this “plate pouring” process was flat glass with good optical transmission qualities. Once the glass was coated on one side with a reflective, low-melting metal, high-quality mirrors could be produced.

France also took steps to promote its own glass industry and attract glass experts from Venice; not an easy move for any Venetian tempted to take his abilities and know-how abroad, given the Republic’s history of discouraging such behaviour (at one point, Venetian glass craftsmen faced death threats if they disclosed glassmaking secrets or took their skills abroad). The French court, for its part, placed heavy duties on glass imports and offered Venetian glassmakers a number of incentives: French nationality after eight years and total exemption from taxes, to name just two.

From craft to industry
It was not until the latter stages of the Industrial Revolution, however, that mechanical technology for mass production and in-depth scientific research into the relationship between the composition of glass and its physical qualities began to appear in the industry.

A key figure and one of the forefathers of modern glass research was the German scientist Otto Schott (1851-1935), who used scientific methods to study the effects of numerous chemical elements on the optical and thermal properties of glass. In the field of optical glass, Schott teamed up with Ernst Abbe (1840-1905), a professor at the University of Jena and joint owner of the Carl Zeiss firm, to make significant technological advances. Another major contributor in the evolution towards mass production was Friedrich Siemens, who invented the tank furnace. This rapidly replaced the old pot furnace and allowed the continuous production of far greater quantities of molten glass.

Increasing automation
Towards the end of the 19th century, the American engineer Michael Owens (1859-1923) invented an automatic bottle blowing machine which only arrived in Europe after the turn of the century. Owens was backed financially by Edward Drummond Libbey, owner of the Libbey Glass Co. of Toledo, Ohio. By 1920, around 200 automatic Owens Libbey Suction Blow machines were operating in the United States. In Europe, smaller, more versatile machines from companies like O’Neill, Miller and Lynch were also popular.

Added impetus was given to automatic production processes in 1923 with the development of the gob feeder, which ensured the rapid supply of more consistently sized gobs in bottle production. Soon afterwards, in 1925, IS (individual section) machines were developed. Used in conjunction with the gob feeders, IS machines allowed the simultaneous production of a number of bottles from one piece of equipment. The gob feeder-IS machine combination remains the basis of most automatic glass container production today.

Modern flat glass technology
In the production of flat glass (where, as explained earlier, molten glass had previously been poured onto large tables then rolled flat into “plates”, cooled, ground and polished before being turned over and given the same treatment on the other surface), the first real innovation came in 1905 when a Belgian named Fourcault managed to vertically draw a continuous sheet of glass of a consistent width from the tank. Commercial production of sheet glass using the Fourcault process eventually got under way in 1914.

Around the end of the First World War, another Belgian engineer, Emile Bicheroux, developed a process whereby the molten glass was poured from a pot directly through two rollers. Like the Fourcault method, this resulted in glass with a more even thickness, and made grinding and polishing easier and more economical.

An offshoot of the evolution in flat glass production was the strengthening of glass by means of lamination (inserting a layer of celluloid material between two sheets of glass). The process was invented and developed by the French scientist Edouard Benedictus, who patented his new safety glass under the name “Triplex” in 1910.

In America, Colburn developed another method for drawing sheet glass. The process was further improved with the support of the US firm Libbey-Owens and was first used for commercial production in 1917.

The Pittsburgh process, developed by the Pittsburgh Plate Glass Company (PPG) and marketed under the Pennvernon name, combined and enhanced the main features of the Fourcault and Libbey-Owens processes, and has been in use since 1928.

The float process developed after the Second World War by Britain’s Pilkington Brothers Ltd., and introduced in 1959, combined the brilliant finish of sheet glass with the optical qualities of plate glass. Molten glass, when poured across the surface of a bath of molten tin, spreads and flattens before being drawn horizontally in a continuous ribbon into the annealing lehr.

Conclusion
Although this brief history ends nearly 40 years ago, technological evolution naturally continues. Not yet ready to be “relegated” to a history of glass are areas such as computerized control systems, coating techniques, solar control technology and “smart matter”, the integration of micro-electronic and mechanical know-how to create glass which is able to “react” to external forces.

history of the chair

It seems that since humankind first stood up to see over the tall Savannah grasses, we’ve been looking for a place to sit back down. The historical record is not quite so succinct, however—but when early migratory peoples first settled down into a domesticated lifestyle, it appears one mark of the civilized person was a seat that elevated the body “away from the cold, damp floor” (de Dampierre 2006). By the simple act of constructing an artificial place to sit, humans began the long tradition of distinguishing themselves from the animal world. It is a form as simple as the bend of our knees and the upright posture of our back, and yet that form is not so simple.

Sitting at the Dawn of Civilization

Archaeological evidence of sculptural relics at Neolithic building sites suggests chair- and bench-like areas, so it appears that chairs emerged during the Stone Age. But it is not certain at what point during the expanse of time after the last Ice Age, from about 10,000 B.C. to the dawn of civilization, the first person crafted a seat with a back (or, alternatively, a simple platform with legs, like a stool) and then sat down on it (Crantz 1998). In addition to simply elevating humans, chairs have long been associated with humans of elevated status in particular.

The Ancient World

It is believed that humans appeared in China as far back as 40,000 B.C., with relatively dense population patterns apparent in Mongolia by 20,000 B.C. Seats have been found in Chinese tombs but seem strictly utilitarian, and designs remain relatively unchanged through the sixteenth century, when a carpenter’s manual depicts standards of Chinese furniture in the form of woodblock prints. Records suggest that the vast majority of “the earliest Chinese did not use chairs, but instead knelt on the ground, leaning back on their heels to support their weight.”

The practice remained common through the tenth century and remains in use today in some traditional settings in Eastern Asia, where low cushions and mats are still frequently used to sit upon the floor (de Dampierre 2006). As in other civilizations, the stool—in this case a folding stool—is considered the oldest Chinese elevated “seat.” All said, there are many ways to sit and many things upon which to sit, but the seat with a back and (most frequently) four legs is generally the Western concept known as a chair.

One need look no further than ancient Egypt for the earliest surviving physical examples of the Western world’s use of chairs. Egyptian tombs that have been unearthed contain chairs and stools from as far back as the Egyptian Old Kingdom, about 2680 B.C., well preserved by Egypt’s dry air. The most famous example dates to 1352 B.C.: the ornate throne sealed in the tomb of Pharaoh Tutankhamen, or King Tut. There is, however, hieroglyphic evidence of chair usage by all strata of society—though certainly not as pervasively as in modern society—dating back at least to the third millennium B.C. (Crantz 1998). These early examples demonstrate basic woodworking skill, which gradually gave way to advanced techniques in woodworking, including sophisticated joints, veneering, ivory and precious metal inlays, and cushioning of virtually all available materials. Indeed, “Egyptian craftsmen…created the fundamentals of all seating furniture,” including folding furniture (de Dampierre 2006).

Early in their history, chairs were largely used by the higher strata of society, particularly in the form of thrones—so the simpler, backless version of the chair, the stool, was the primary seat of the lower strata. Domestic furniture like the low-profile, rectangular-framed stools of ancient Egypt was “formed with a double cove construction of curved wooden slats…which pass through holes in the frame” (de Dampierre 2006).

Outside of Egypt, stelae from the Euphrates river valley in Mesopotamia depict the usage of chairs, particularly by kings, but Galen Crantz suggests the more humid climate prevented any wooden or rush-based chairs from surviving. The archaeological record from the other great early civilization across the Mediterranean—Greece and the Cyclades Islands—is similarly sparse, broken by devastating earthquakes and fires that disrupted and relocated entire civilizations. A few surviving pieces of artistry depict simple stools from the second millennium B.C., though the first cultures appeared in Greece as much as a thousand years earlier.

After a five-century gap in the archaeological record, paintings and sculpture starting from about the seventh century B.C. have been unearthed that show an evolution of design resulting in much more sophisticated furniture. As the culture evolved, Greek society’s focus on form, rhythm, precision, clarity, and proportion worked its way into all aspects of life, including furniture. Chairs, stools, and benches served all levels of society, a fact made evident by their surviving art and, more importantly, their literature (de Dampierre 2006).

Etymology

The Greek language lends to the Romance languages a contraction of kathedra (also into Latin, cathedra), which is derived from kata, for “down,” and hedra, for “to sit.” The word passed into Middle English from the Old French chaiere and the variant chaise, both of which remain variously in use in English for styles of chairs today (Jewell and Abate 2001). The other important related word, “throne,” arrives in the English language from the Indo-European base word dher, which means “to hold or support.” For Crantz, the distinction suggests that thrones were meant to support the privileged or royalty, while chairs, which anyone can use—in a literal physical sense—were simply meant for sitting down. Meanwhile, in contrast to the upright back of the throne, a more reclined, relaxed, lighter Greek chair with a tilted back called the klismos found its way, along with commonly used stools, into the next great Western civilization (1998).

The Roman Empire and the Dark Era of Chairs

In Rome, “the bed was the all-purpose piece of furniture,” a place where a Roman would not only sleep, but “eat, read, write, and socialize,” while formal dinner banquets were held upon U-shaped couches (Crantz 1998). Though rarer, chairs such as the upright thronus and the reclined cathedra were used for formal functions and by lounging women, respectively. Like the work of many early civilizations, the mostly wooden pieces crafted during the Roman Empire have not survived to the present day. Existing evidence has shown that a few largely identical designs were used throughout the empire, from North Africa to Germany to Britain, and the more durable pieces incorporated various metals and stone.

Regarding hierarchy and posture in the Roman Empire, stools sufficiently supported children both in school and at the dinner table, while the father lounged on a couch and the mother sat in a chair (though later in the history of the Empire, it appears the mother reclined on a couch as well). The hierarchy also placed servants on stools, that time-honored seat of the masses. But the arrangement does speak to many cultures’ tendency to situate their royalty and their gods in a chair, seated in an upright, supported position (though there are also examples of individuals slumping as in a klismos that complicate the picture).

While modern scholars have discouraged the use of terms such as “The Dark Ages” to describe the era between the sacking of Rome and the rise of the Renaissance, chairs saw very little development during the millennium of the Middle Ages. Indeed, those who sacked Rome took no interest in its culture—so along with Rome’s myriad technological advancements, simpler things such as its chairs also virtually vanished from the minds of civilization. Throughout the Middle Ages, chairs in the standard definition were quite scarce, and their use was limited to the masters of the household, even in the richest households. Medieval folk often improvised places to sit, from storage chests or heavy high-backed chairs with chests under the seats that were anchored against walls (to prevent theft as well as indicate status) to benches like those used in church choirs—or they simply squatted in a way that is time-honored in other societies around the world (Crantz 1998).

The Chair Revival

The fifteenth century saw a centralization of urban trade centers and governments, and with a settling of society came a settling of wealthy noblemen. These individuals and families began investing in permanent homesteads, wherein chairs became free-standing pieces of furniture with specific functions, often still reserved for the elite. The Renaissance saw a revival of Antiquity and a renewal of culture, and with the refreshed outlook came more sophisticated chairs with lighter, more complex construction and classically inspired decorative motifs. The most important innovation was the lighter construction. The Italian sgabello, for example, was a low, three-legged, stool-inspired chair with a high balanced backrest. Its successor in the sixteenth century added a fourth leg, lowered the back and, with no chest under the seat, ushered in “the age of completely portable furniture that could be moved from room to room as need had come” (de Dampierre 2006).

Meanwhile, in establishing the divine authority of royalty of the seventeenth century, the thrones of those such as Louis XIV, Queen Christina of Sweden, and Alexis I of Russia were magnificent and majestic. In the court of Louis XIV, in particular, the hierarchy of chairs was strictly regulated, the most important being the armchair—a term first used in this century (Crantz 1998), followed in order by the chair with a back, stools, and hassocks. However, “in the king’s presence most people had to remain standing. Permission to use a stool—the only seat allowed in his presence—was a coveted honor” (de Dampierre 2006).

The era of chairs in the seventeenth and eighteenth centuries saw a flourish of high style, beginning with the Baroque and moving through the Rococo, Neoclassical, and the cult of Antiquity: styles that merged evolving taste in decorative arts with the form and status of chairs. During the period of the Restoration in England (into the late eighteenth century), inlaid decorative elements and ornate carving became more common. Most importantly, at the same time, chairs were simply becoming “more common as life became more sociable” (de Dampierre 2006).

Europe and America alike focused on status in chair production, beginning even in the pre-Colonial era of the United States. Stools and benches continued to serve the masses, while chairs remained the purchase of people of status who could afford them, up until the nineteenth century. Into the 1800s, however, chairs became more commonplace in American households, with usually enough provided for every member to sit down to dinner. Indeed, by the 1830s, factory-manufactured “fancy chairs” (and, by the end of the century, mail-order sets from the likes of Sears, Roebuck and Co.) allowed families to purchase machined sets. The Industrial Revolution became the great democratizer of the once-elite chair (Crantz 1998).

La Chaise Moderne

The twentieth century saw a range of intriguing chair design influenced by the various artistic movements, beginning with Art Nouveau and Art Deco. Art Deco emerged with the Machine Age, which included austerely styled but well-crafted pieces by names like Le Corbusier and the Bauhaus school. Additional artistic styles that worked their way into chair design included Cubism, Surrealism, the Baroque, and a “primitive” style that looked to the “timeless innovation” of the long (but sparsely recorded) history of African chairs and stools, particularly “those for ritual, political, or symbolic use” (de Dampierre 2006).

The inter-war period, in particular, saw a flourishing of unique style prompted by the passionate Modernist pursuit of proper form. The chair was based upon “the sociological expression of modern values” and, as a result, those early twentieth-century chairs have become admired classics, particularly among designers, but with varying influence on the public (Crantz 1998). Artistic movements of the early 1900s have been variously adapted by contemporary designers both as a retrospective and to meet the demands of consumers, whose interests cover styles across the decades. The other side of the picture still portrays a “non-aesthetic aesthetic” of chair ergonomics, or design that favors function over form, to the physiological benefit of the consumer (de Dampierre 2006).

Somewhere in the middle of social theory and ergonomics exists the ideal chair. But when the wide array of applicable theories and artistic sensibilities combine with a world of distinct cultural aesthetics, the perfect chair is as individual as the person designing it. It could be an ergonomically perfect model designed for a day of productivity in the office, or a simple barstool, efficient in design but fully functional and serving a specific purpose implicit in its name. Meanwhile, outlets such as Ikea and World Market display chairs at the opposite ends of the design spectrum, from the stylish but inexpensive manufactured chair to the fairly traded, hand-crafted artisan imports from around the world.

Finally, one chair, perhaps, has outdone them all: the ubiquitous one-piece polypropylene plastic chair, that three-sixteenth-of-an-inch “resin chair” which is manufactured around the world and shows up in virtually every imaginable setting. At first glance, it is a practical, tacky piece that is more an afterthought than an object of attention. But as even the Smithsonian Institution has avowed, the resin chair is inextricably tied to the history of chairs, incorporating the postwar pursuit of progressive design with ease of manufacture, portability, and basic comfort (Gosnell 2004).

The electric one


In 1881, capital punishment was in common use in the United States–but that usually meant hanging, or occasionally a firing squad. Enter New York dentist Albert Southwick, who saw an old drunk accidentally electrocute himself on a power generator with no visible pain. He told a friend in the legislature, and the idea of executing people using the modern marvel of electricity began to take hold.

The Truth About Cats and Dogs:

The pioneer of commercial electricity was Thomas Edison, whose direct current (DC) approach was safer than George Westinghouse’s newer alternating current (AC) technology–but inferior in every other discernible respect. To protect the safety of the American public (and his commercial interests), Edison held a demonstration in which he used a 1,000-volt alternating current generator to kill cats, dogs, and a large horse. Waiting in the wings were legislators eager to adopt Southwick’s vision of a humane, electricity-based form of execution.

Chapter 489:

Southwick soon became part of a New York legislative panel charged with the goal of eliminating gruesome forms of execution by replacing them with electrocution. In 1888, before the electric chair had technically been invented, the State of New York added Chapter 489 to its state code–establishing electrocution as the state’s official execution method.

The Strange Death of William Kemmler:

In March of 1889, William Kemmler murdered his lover, Matilda Ziegler. Two months later he was sentenced to die in Auburn Prison’s electric chair, the first in the country. It took eight agonizing minutes to kill him, but it did the job, and electrocution soon became the most widely used method of legal execution in the United States.

The Mercy Seat:

Between 1890 and 1973, over 4,000 people were executed in the electric chair–from infamous murderers to accused traitors to railroaded black defendants in the South. Perhaps the most famous electric chair executions of this era were those of Bruno Hauptmann (1936), the alleged murderer of the Lindbergh baby, and Julius and Ethel Rosenberg (1953), alleged spies for the Soviet Union.

An Outdated Method:

After the death penalty came back from a four-year moratorium in 1977, the electric chair began to be replaced by the gas chamber and lethal injection; since then, only 154 people have been put to death by electrocution. (Among them: Ted Bundy.) Gruesome botched executions, in which faulty equipment tortured prisoners to death or simply burned them alive, became almost routine–most notably in Florida, where the equipment was not well maintained and was occasionally used by undertrained staff.

The Nebraska Case:

By February 2008, the electric chair had mostly become a novelty. Only one state–Nebraska–still used it as a primary method of execution, all others having relegated it to optional status for prisoners who wanted a more distinctive death. So when the Nebraska Supreme Court ruled that the electric chair constituted death by torture, there was little outcry. If the electric chair is ever used to execute prisoners again, it will most likely be on an extremely small-scale basis.

The Electric Chair Outside of the United States:

The electric chair was also used in the Philippines, but no other country has adopted it as an execution method.

The rocking one

The first rocking chairs are believed to have originated in England around the early to mid-1700s, where the rocking chair was originally utilized as a garden sitting chair. The Windsor rocking chair was called such because of its place of origination, Windsor Castle, around the beginning of the 1700s. It was a wooden rocking chair, wood being the easiest medium through which to create such a piece of furniture, and since its creation it has given birth to many new variations of the rocking chair, such as the glider rocking chair. In the timeline of rocking chairs, the wicker rocking chair came after the creation of the Windsor, which was mainly a wood rocking chair with a heavily rounded hoop back whose spindles gave it the appearance of a bird cage.

Just as outdoor rocking chairs and porch rocking chairs are popular today, in the first days of rocking chairs people enjoyed the relaxing, undulating back and forth motion while taking in the beauty of nature in the garden. This gentle motion created by the rocking chair is found soothing by many, similar to the motions of a swaying cradle. Often outdoor rocking chairs are displayed on porches and have since become an American standard for relaxation and household outdoor enjoyment.

After its creation in England, the rocking chair is said to have become mainstream in America around 1750. Benjamin Franklin is widely credited with creating the first wood rocking chair, which he made by simply modifying an existing chair and adding rockers to it. Franklin apparently adapted the bowed rockers from a baby’s cradle to fit the design of an ordinary chair and thus, the rocking chair was born. However, historians have had varying opinions on the validity of the historical evidence supporting Mr. Franklin as the originator in America. We do know, however, that both the rocking horse and the rocking cradle predate the rocking chair. By the end of the 18th century, rocking chairs had become the most common type of outdoor porch furniture. The rocking chair soon became a mainstream household object, with the adult rocking chair serving as a symbol of status for many grandparents and heads of families in the Midwest during the early 19th century.

The rocking chair gains its ability to rock from a unique feature: it makes contact with the floor at only two points at any one time. Without this feature, the wood rocking chair would not be a rocking chair at all; one could just as easily classify it as a stool, bench, ottoman, or plain old chair. There is an ergonomic benefit associated with rocking chairs as well. Because the chair balances around the user’s center of gravity at a reclined angle, the rocking chair leaves its user in an almost weightless state.

In the late 1800s, the first lightweight rocking chair, called the bentwood rocking chair, was crafted by a German craftsman named Michael Thonet. He utilized a type of steamed wood, which he bent and manipulated to achieve the graceful look of the bentwood rocking chair. With their high affordability and lightweight but beautiful designs, bentwood rocking chairs became extremely popular among outdoor rocking chairs across America and the rest of the world.

The modern rocking chair design has been pushed to the limit. Everything from glider rocking chairs to portable children’s rocking chairs, and even a high-tech rocking chair called the Gravitron, has been inspired by the classic wooden rocking chair. It is hard to further perfect the simple design associated with rocking chairs, but the materials through which they have been designed since their creation have varied greatly. From high-tech steel rocking chairs, to rocking surfboard chairs, to collapsible rocking chairs, the rocking chair has undergone transformations over the years but, all in all, has withstood the test of time.