<doc url='http://en.wikipedia.org/wiki?curid=1207'>
Amino acid
Amino acids (, , or ) are biologically important organic compounds composed of amine (-NH2) and carboxylic acid (-COOH) functional groups, along with a side-chain specific to each amino acid. The key elements of an amino acid are carbon, hydrogen, oxygen, and nitrogen, though other elements are found in the side-chains of certain amino acids. About 500 amino acids are known and can be classified in many ways. They can be classified according to the core structural functional groups' locations as alpha- (α-), beta- (β-), gamma- (γ-) or delta- (δ-) amino acids; other categories relate to polarity, pH level, and side-chain group type (aliphatic, acyclic, aromatic, containing hydroxyl or sulfur, etc.). In the form of proteins, amino acids comprise the second-largest component (water is the largest) of human muscles, cells and other tissues. Outside proteins, amino acids perform critical roles in processes such as neurotransmitter transport and biosynthesis.
In biochemistry, amino acids having both the amine and the carboxylic acid groups attached to the first (alpha-) carbon atom have particular importance. They are known as 2-, alpha-, or α-amino acids (generic formula H2NCHRCOOH in most cases, where R is an organic substituent known as a 'side-chain'); often the term 'amino acid' is used to refer specifically to these. They include the 23 proteinogenic ('protein-building') amino acids, which combine into peptide chains ('polypeptides') to form the building-blocks of a vast array of proteins. These are all L-stereoisomers ('left-handed' isomers), although a few D-amino acids ('right-handed') occur in bacterial envelopes and some antibiotics. Twenty of the proteinogenic amino acids are encoded directly by triplet codons in the genetic code and are known as 'standard' amino acids. The other three ('non-standard' or 'non-canonical') are selenocysteine (present in many noneukaryotes as well as most eukaryotes, but not coded directly by DNA), pyrrolysine (found only in some archaea and one bacterium) and N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts). Pyrrolysine and selenocysteine are encoded via variant codons; for example, selenocysteine is encoded by a stop codon in combination with a SECIS element. Codon–tRNA combinations not found in nature can also be used to 'expand' the genetic code and create novel proteins, known as alloproteins, incorporating non-proteinogenic amino acids.
Many important proteinogenic and non-proteinogenic amino acids also play critical non-protein roles within the body. For example, in the human brain, glutamate (standard glutamic acid) and gamma-amino-butyric acid ('GABA', non-standard gamma-amino acid) are, respectively, the main excitatory and inhibitory neurotransmitters; hydroxyproline (a major component of the connective tissue collagen) is synthesised from proline; the standard amino acid glycine is used to synthesise porphyrins used in red blood cells; and the non-standard carnitine is used in lipid transport.
Nine proteinogenic amino acids are called 'essential' for humans because they cannot be created from other compounds by the human body and, so, must be taken in as food. Others may be conditionally essential for certain ages or medical conditions. Essential amino acids may also differ between species.
Because of their biological significance, amino acids are important in nutrition and are commonly used in nutritional supplements, fertilizers, and food technology. Industrial uses include the production of drugs, biodegradable plastics, and chiral catalysts.
History.
The first few amino acids were discovered in the early 19th century. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound in asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. Usage of the term 'amino acid' in the English language dates from 1898. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister proposed that proteins are the result of the formation of bonds between the amino group of one amino acid and the carboxyl group of another, in a linear structure that Fischer termed 'peptide'.
General structure.
In the structure shown at the top of the page, R represents a side-chain specific to each amino acid. The carbon atom next to the carboxyl group is called the α-carbon, and amino acids with a side-chain bonded to this carbon are referred to as 'alpha amino acids'. These are the most common form found in nature. In the alpha amino acids, the α-carbon is a chiral carbon atom, with the exception of glycine. In amino acids that have a carbon chain attached to the α-carbon (such as lysine, shown to the right), the carbons are labeled in order as α, β, γ, δ, and so on. In some amino acids, the amine group is attached to the β- or γ-carbon, and these are therefore referred to as 'beta' or 'gamma' amino acids.
Amino acids are usually classified by the properties of their side-chain into four groups. The side-chain can make an amino acid a weak acid or a weak base, and a hydrophile if the side-chain is polar or a hydrophobe if it is nonpolar. The chemical structures of the 22 standard amino acids, along with their chemical properties, are described more fully in the article on these proteinogenic amino acids.
The phrase 'branched-chain amino acids' or BCAA refers to the amino acids having aliphatic side-chains that are non-linear; these are leucine, isoleucine, and valine. Proline is the only proteinogenic amino acid whose side-group links to the α-amino group and, thus, is also the only proteinogenic amino acid containing a secondary amine at this position. In chemical terms, proline is, therefore, an imino acid, since it lacks a primary amino group, although it is still classed as an amino acid in the current biochemical nomenclature, and may also be called an 'N-alkylated alpha-amino acid'.
Isomerism.
Of the standard α-amino acids, all but glycine can exist in either of two enantiomers, called L- or D-amino acids, which are mirror images of each other ('see also Chirality'). While L-amino acids represent all of the amino acids found in proteins during translation in the ribosome, D-amino acids are found in some proteins produced by enzymatic posttranslational modifications after translation and translocation to the endoplasmic reticulum, as in exotic sea-dwelling organisms such as cone snails. They are also abundant components of the peptidoglycan cell walls of bacteria, and D-serine may act as a neurotransmitter in the brain. D-amino acids are used in racemic crystallography to create centrosymmetric crystals, which (depending on the protein) may allow for easier and more robust protein structure determination. The L and D convention for amino acid configuration refers not to the optical activity of the amino acid itself but rather to the optical activity of the isomer of glyceraldehyde from which that amino acid can, in theory, be synthesized (D-glyceraldehyde is dextrorotatory; L-glyceraldehyde is levorotatory).
Alternatively, the '(S)' and '(R)' designators are used to indicate the absolute stereochemistry. Almost all of the amino acids in proteins are '(S)' at the α-carbon, with cysteine being '(R)' and glycine non-chiral. Cysteine is unusual because the sulfur atom at the second position of its side-chain has a larger atomic mass than the groups attached to the first side-chain carbon in the other standard amino acids, giving it the '(R)' designation instead of '(S)'.
Zwitterions.
The amine and carboxylic acid functional groups found in amino acids allow them to have amphiprotic properties. Carboxylic acid groups (−CO2H) can be deprotonated to become negative carboxylates (−CO2−), and α-amino groups (−NH2) can be protonated to become positive α-ammonium groups (−NH3+). At pH values greater than the pKa of the carboxylic acid group (the mean for the 20 common amino acids is about 2.2; see the table of amino acid structures above), the negative carboxylate ion predominates. At pH values lower than the pKa of the α-ammonium group (the mean for the 20 common α-amino acids is about 9.4), the nitrogen is predominantly protonated as a positively charged α-ammonium group. Thus, at pH between 2.2 and 9.4, the predominant form adopted by α-amino acids contains a negative carboxylate and a positive α-ammonium group, as shown in structure (2) on the right, and so has net zero charge. This molecular state is known as a zwitterion, from the German Zwitter meaning 'hermaphrodite' or 'hybrid'. Below pH 2.2, the predominant form has a neutral carboxylic acid group and a positive α-ammonium ion (net charge +1), and above pH 9.4, a negative carboxylate and a neutral α-amino group (net charge −1). The fully neutral form (structure (1) on the right) is a very minor species in aqueous solution throughout the pH range (less than 1 part in 10⁷). Amino acids also exist as zwitterions in the solid phase, and crystallize with salt-like properties, unlike typical organic acids or amines.
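The pH reasoning above can be sketched in a few lines of code. This is a simplified illustration using the mean pKa values quoted above (about 2.2 and 9.4); side-chain ionization is ignored, and the function name is ours, not a standard API.

```python
# Predominant ionization state of a generic alpha-amino acid at a given pH,
# using the mean pKa values quoted in the text (~2.2 for -COOH, ~9.4 for -NH3+).
# Simplified sketch: side-chain ionization is ignored.

def predominant_form(ph, pka_cooh=2.2, pka_ammonium=9.4):
    if ph < pka_cooh:
        return "cation (+1): -COOH and -NH3+"   # both groups protonated
    if ph > pka_ammonium:
        return "anion (-1): -COO- and -NH2"     # both groups deprotonated
    return "zwitterion (0): -COO- and -NH3+"    # net zero charge

print(predominant_form(7.0))  # physiological pH falls in the zwitterion range
```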
Isoelectric point.
The variation in titration curves when the amino acids are grouped by category can be seen here. With the exception of tyrosine, using titration to differentiate between hydrophobic amino acids is problematic.
At pH values between the two pKa values, the zwitterion predominates, but coexists in dynamic equilibrium with small amounts of net negative and net positive ions. At the exact midpoint between the two pKa values, the trace amount of net negative and trace of net positive ions exactly balance, so that average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = ½(pKa1 + pKa2). The individual amino acids all have slightly different pKa values, so have different isoelectric points. For amino acids with charged side-chains, the pKa of the side-chain is involved. Thus for Asp, Glu with negative side-chains, pI = ½(pKa1 + pKaR), where pKaR is the side-chain pKa. Cysteine also has potentially negative side-chain with pKaR = 8.14, so pI should be calculated as for Asp and Glu, even though the side-chain is not significantly charged at neutral pH. For His, Lys, and Arg with positive side-chains, pI = ½(pKaR + pKa2). Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point and some amino acids (in particular, with non-polar side-chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point.
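The pI formulas above can be expressed as a small helper. The pKa values in the example are common textbook figures used only for illustration and may differ slightly between sources.

```python
# Isoelectric point from the two pKa values that bracket the zwitterion,
# following pI = 1/2(pKa1 + pKa2) and the side-chain variants described above.

def isoelectric_point(pka1, pka2, pka_side=None, side_charge=None):
    """side_charge: None, 'neg' (Asp, Glu, Cys) or 'pos' (His, Lys, Arg)."""
    if side_charge == "neg":
        return (pka1 + pka_side) / 2    # pI = 1/2(pKa1 + pKaR)
    if side_charge == "pos":
        return (pka_side + pka2) / 2    # pI = 1/2(pKaR + pKa2)
    return (pka1 + pka2) / 2            # pI = 1/2(pKa1 + pKa2)

# Glycine with illustrative textbook values pKa1 = 2.34, pKa2 = 9.60:
print(isoelectric_point(2.34, 9.60))    # about 5.97
```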
Occurrence and functions in biochemistry.
Essential amino acids.
Amino acids are the structural units (monomers) that make up proteins. They join together to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These polymers are linear and unbranched, with each amino acid within the chain attached to two neighboring amino acids. The process of making proteins is called 'translation' and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme called a ribosome. The order in which the amino acids are added is read through the genetic code from an mRNA template, an RNA copy of one of the organism's genes.
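The translation step described above can be sketched by stepping through an mRNA in codon triplets and looking each one up in a codon table. The table below is only a toy subset of the standard genetic code, chosen for illustration.

```python
# Toy subset of the standard genetic code (one-letter amino acid symbols).
CODON_TABLE = {
    "AUG": "M",                          # methionine, the usual start codon
    "UUU": "F", "GGC": "G", "AAA": "K",
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def translate(mrna):
    """Read an mRNA in triplets and return the one-letter peptide string."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":                    # a stop codon ends the chain
            break
        peptide.append(aa)
    return "".join(peptide)

print(translate("AUGUUUGGCAAAUAA"))      # -> MFGK
```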
Twenty-three amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 21 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms. This UAG codon is followed by a PYLIS downstream sequence.
Non-proteinogenic amino acids.
Aside from the 23 proteinogenic amino acids, there are many other amino acids that are called 'non-proteinogenic'. These either are not found in proteins (for example, carnitine and GABA) or are not produced directly and in isolation by standard cellular machinery (for example, hydroxyproline and selenomethionine).
Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification, which is modification after translation during protein synthesis. These modifications are often essential for the function or regulation of a protein; for example, the carboxylation of glutamate allows for better binding of calcium cations, and the hydroxylation of proline is critical for maintaining connective tissues. Another example is the formation of hypusine in the translation initiation factor EIF5A, through modification of a lysine residue. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane.
Some non-proteinogenic amino acids are not found in proteins. Examples include lanthionine, 2-aminoisobutyric acid, dehydroalanine, and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid beta alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A.
Non-standard amino acids.
The 20 amino acids that are encoded directly by the codons of the universal genetic code are called 'standard' or 'canonical' amino acids. The others are called 'non-standard' or 'non-canonical'. Most of the non-standard amino acids are also non-proteinogenic (i.e. they cannot be used to build proteins), but three of them are proteinogenic, as they can be used to build proteins by exploiting information not encoded in the universal genetic code.
The three non-standard proteinogenic amino acids are selenocysteine (present in many noneukaryotes as well as most eukaryotes, but not coded directly by DNA), pyrrolysine (found only in some archaea and one bacterium), and N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts). For example, 25 human proteins include selenocysteine (Sec) in their primary structure, and the structurally characterized selenoenzymes employ Sec as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons; for example, selenocysteine is encoded by a stop codon in combination with a SECIS element.
In human nutrition.
When taken up into the human body from the diet, the 22 standard amino acids either are used to synthesize proteins and other biomolecules or are oxidized to urea and carbon dioxide as a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose through gluconeogenesis.
The pyrrolysine trait is restricted to several microbes, and only one organism has both Pyl and Sec. Of the 22 standard amino acids, 9 are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food. In addition, cysteine, taurine, tyrosine, and arginine are considered semi-essential amino acids in children (though taurine is not technically an amino acid), because the metabolic pathways that synthesize these amino acids are not fully developed. The amounts required also depend on the age and health of the individual, so it is hard to make general statements about the dietary requirement for some amino acids.
(*) Essential only in certain cases.
Classification.
Although there are many ways to classify amino acids, these molecules can be assorted into six main groups, on the basis of their structure and the general chemical characteristics of their R groups.
Non-protein functions.
In humans, non-protein amino acids also have important roles as metabolic intermediates, such as in the biosynthesis of the neurotransmitter gamma-amino-butyric acid (GABA). Many amino acids are used to synthesize other molecules, for example:
However, not all of the functions of other abundant non-standard amino acids are known.
Some non-standard amino acids are used as defenses against herbivores in plants. For example, canavanine is an analogue of arginine that is found in many legumes, and in particularly large amounts in 'Canavalia gladiata' (sword bean). This amino acid protects the plants from predators such as insects and can cause illness in people if some types of legumes are eaten without processing. The non-protein amino acid mimosine is found in other species of legume, in particular 'Leucaena leucocephala'. This compound is an analogue of tyrosine and can poison animals that graze on these plants.
Uses in industry.
Amino acids are used for a variety of applications in industry, but their main use is as additives to animal feed. This is necessary because many of the bulk components of these feeds, such as soybeans, have low levels of, or lack, some of the essential amino acids: lysine, methionine, threonine, and tryptophan are the most important in the production of these feeds. In this industry, amino acids are also used to chelate metal cations in order to improve the absorption of minerals from supplements, which may be required to improve the health or productivity of these animals.
The food industry is also a major consumer of amino acids, in particular, glutamic acid, which is used as a flavor enhancer, and Aspartame (aspartyl-phenylalanine-1-methyl ester) as a low-calorie artificial sweetener. Similar technology to that used for animal nutrition is employed in the human nutrition industry to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation.
The chelating ability of amino acids has been used in fertilizers for agriculture to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants. The remaining production of amino acids is used in the synthesis of drugs and cosmetics.
Expanded genetic code.
Since 2001, 40 non-natural amino acids have been added into proteins by creating a unique codon (recoding) and a corresponding transfer-RNA:aminoacyl-tRNA-synthetase pair to encode it. These amino acids carry diverse physicochemical and biological properties and serve as tools for exploring protein structure and function, or for creating novel or enhanced proteins.
Nullomers.
Nullomers are codons that in theory code for an amino acid but are selected against in nature in favor of a synonymous codon; for example, bacteria prefer to use CGA instead of AGA to code for arginine. This creates some sequences that do not appear in the genome. This characteristic can be exploited to create new selective cancer-fighting drugs and to prevent cross-contamination of DNA samples from crime-scene investigations.
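The nullomer idea can be illustrated by tallying in-frame codon usage in a coding sequence and listing the codons that never appear. The sequence below is a made-up toy example, not real genomic data.

```python
from itertools import product

def absent_codons(cds):
    """Return DNA codons never used (in-frame) in a coding sequence."""
    used = {cds[i:i + 3] for i in range(0, len(cds) - 2, 3)}
    all_codons = {"".join(p) for p in product("ACGT", repeat=3)}
    return sorted(all_codons - used)

toy_cds = "ATGCGACGAAGATAA"         # codons: ATG, CGA, CGA, AGA, TAA
print(len(absent_codons(toy_cds)))  # 60 of the 64 possible codons are unused
```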
Chemical building blocks.
Amino acids are important as low-cost feedstocks. These compounds are used in chiral pool synthesis as enantiomerically pure building-blocks.
Amino acids have been investigated as precursors to chiral catalysts, e.g., for asymmetric hydrogenation reactions, although no commercial applications exist.
Biodegradable plastics.
Amino acids are under development as components of a range of biodegradable polymers. These materials have applications as environmentally friendly packaging and in medicine, in drug delivery and the construction of prosthetic implants. These polymers include polypeptides, polyamides, polyesters, polysulfides, and polyurethanes with amino acids either forming part of their main chains or bonded as side-chains. These modifications alter the physical properties and reactivities of the polymers. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable anti-scaling agent and a corrosion inhibitor. In addition, the aromatic amino acid tyrosine is being developed as a possible replacement for toxic phenols such as bisphenol A in the manufacture of polycarbonates.
Reactions.
As amino acids have both a primary amine group and a primary carboxyl group, these chemicals can undergo most of the reactions associated with these functional groups. These include nucleophilic addition, amide bond formation, and imine formation for the amine group, and esterification, amide bond formation, and decarboxylation for the carboxylic acid group. The combination of these functional groups allows amino acids to be effective polydentate ligands for metal–amino acid chelates.
The multiple side-chains of amino acids can also undergo chemical reactions. The types of these reactions are determined by the groups on these side-chains and are, therefore, different between the various types of amino acid.
Chemical synthesis.
Several methods exist to synthesize amino acids. One of the oldest begins with the bromination of a carboxylic acid at the α-carbon. Nucleophilic substitution with ammonia then converts the alkyl bromide to the amino acid. Alternatively, the Strecker amino acid synthesis involves the treatment of an aldehyde with potassium cyanide and ammonia; this produces an α-amino nitrile as an intermediate. Hydrolysis of the nitrile in acid then yields an α-amino acid. Using ammonia or ammonium salts in this reaction gives unsubstituted amino acids, whereas substituting primary and secondary amines yields substituted amino acids. Likewise, using ketones instead of aldehydes gives α,α-disubstituted amino acids. The classical synthesis gives racemic mixtures of α-amino acids as products, but several alternative procedures using asymmetric auxiliaries or asymmetric catalysts have been developed.
Currently, the most widely adopted method is automated synthesis on a solid support (e.g., polystyrene beads), using protecting groups (e.g., Fmoc and t-Boc) and activating groups (e.g., DCC and DIC).
Peptide bond formation.
As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus.
However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamic acid through a peptide bond formed between the side-chain carboxyl of the glutamate (the gamma carbon of this side-chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione.
In chemistry, peptides are synthesized by a variety of reactions. One of the most-used in solid-phase peptide synthesis uses the aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. The ability to easily synthesize vast numbers of different peptides by varying the types and order of amino acids (using combinatorial chemistry) has made peptide synthesis particularly important in creating libraries of peptides for use in drug discovery through high-throughput screening.
Biosynthesis.
In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. In order to form other amino acids, the plant uses transaminases to move the amino group to another alpha-keto carboxylic acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too.
Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosyl methionine, while hydroxyproline is made by a posttranslational modification of proline.
Microorganisms and plants can synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptide antibiotics such as alamethicin. In plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is a key intermediate in the production of the plant hormone ethylene.
Catabolism.
Amino acids must first pass out of organelles and cells into blood circulation via amino acid transporters, since the amine and carboxylic acid groups are typically ionized. Degradation of an amino acid, occurring in the liver and kidneys, often involves deamination by moving its amino group to alpha-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle, as detailed in the image at right.
Physicochemical properties of amino acids.
The 20 amino acids encoded directly by the genetic code can be divided into several groups based on their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties are important for protein structure and protein–protein interactions. The water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side-chains are exposed to the aqueous solvent. The integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them into the lipid bilayer. In the case part-way between these two extremes, some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that locks onto the membrane. In similar fashion, proteins that have to bind to positively charged molecules have surfaces rich with negatively charged amino acids like glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich with positively charged chains like lysine and arginine. There are different hydrophobicity scales of amino acid residues.
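The residue groupings named above can be captured in a small lookup table. The group membership follows only the residues listed in this paragraph (one-letter codes), not any particular hydrophobicity scale.

```python
# Group membership follows the residues named in the text (one-letter codes).
RESIDUE_CLASSES = {
    "hydrophobic": {"L", "I", "V", "F", "W"},  # Leu, Ile, Val, Phe, Trp
    "negative":    {"E", "D"},                 # Glu, Asp
    "positive":    {"K", "R"},                 # Lys, Arg
}

def classify(residue):
    """Return the class name for a one-letter residue code, or 'other'."""
    for name, members in RESIDUE_CLASSES.items():
        if residue in members:
            return name
    return "other"

print(classify("W"), classify("D"), classify("G"))  # hydrophobic negative other
```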
Some amino acids have special properties such as cysteine, that can form covalent disulfide bonds to other cysteine residues, proline that forms a cycle to the polypeptide backbone, and glycine that is more flexible than other amino acids.
Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acids in proteins. Some modifications can produce hydrophobic lipoproteins or hydrophilic glycoproteins. These types of modification allow the reversible targeting of a protein to a membrane. For example, the addition and removal of the fatty acid palmitic acid to cysteine residues in some signaling proteins causes the proteins to attach to and then detach from cell membranes.
Table of standard amino acid abbreviations and properties.
Two additional amino acids, selenocysteine and pyrrolysine, are in some species coded for by codons that are usually interpreted as stop codons.
In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue.
Unk is sometimes used instead of Xaa, but is less standard.
In addition, many non-standard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz-Phe-boroLeu, and MG132 is Z-Leu-Leu-Leu-al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet).
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1208'>
Alan Turing
Alan Mathison Turing (23 June 1912 – 7 June 1954) was a British mathematician, logician, cryptanalyst, philosopher, computer scientist, mathematical biologist, and marathon and ultra-distance runner. He was highly influential in the development of computer science, providing a formalisation of the concepts of 'algorithm' and 'computation' with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.
During World War II, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre. For a time he led Hut 8, the section responsible for German naval cryptanalysis. He devised a number of techniques for breaking German ciphers, including improvements to the pre-war Polish bombe method, an electromechanical machine that could find settings for the Enigma machine. Winston Churchill said that Turing made the single biggest contribution to Allied victory in the war against Nazi Germany. Turing's pivotal role in cracking intercepted coded messages enabled the Allies to defeat the Nazis in several crucial battles.
After the war, he worked at the National Physical Laboratory, where he designed the ACE, among the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Laboratory at Manchester University, where he assisted development of the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis, and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.
Turing was prosecuted for homosexuality in 1952, when such acts were still criminalised in the UK. He accepted treatment with oestrogen injections (chemical castration) as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death a suicide; his mother and some others believed it was accidental. On 10 September 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for 'the appalling way he was treated.' The Queen granted him a posthumous pardon on 24 December 2013.
Early life and career.
Turing was born in Paddington, London, while his father was on leave from his position with the Indian Civil Service (ICS) at Chhatrapur, Bihar and Orissa Province, in British India. Turing's father, Julius Mathison Turing (1873–1947), was the son of a clergyman from a Scottish family of merchants which had been based in the Netherlands and included a baronet. Julius's wife, Alan's mother, was Ethel Sara (née Stoney; 1881–1976), daughter of Edward Waller Stoney, chief engineer of the Madras Railways. The Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius' work with the ICS brought the family to British India, where his grandfather had been a general in the Bengal Army. However, both Julius and Ethel wanted their children to be brought up in England, so they moved to Maida Vale, London, where Turing was born on 23 June 1912, as recorded by a blue plaque on the outside of the house of his birth, later the Colonnade Hotel. He had an elder brother, John (the father of Sir John Dermot Turing, 12th Baronet of the Turing baronets).
His father's civil service commission was still active, and during Turing's childhood years his parents travelled between Hastings in England and India, leaving their two sons to stay with a retired Army couple. At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea, now marked with a blue plaque.
Very early in life, Turing showed signs of the genius he was later to display prominently. His parents purchased a house in Guildford in 1927, and Turing lived there during school holidays. The location is also marked with a blue plaque.
His parents enrolled him at St Michael's, a day school at 20 Charles Road, St Leonards-on-Sea, at the age of six. The headmistress recognised his talent early on, as did many of his subsequent educators. In 1926, at the age of 13, he went on to Sherborne School, a well-known independent school in the market town of Sherborne in Dorset. The first day of term coincided with the 1926 General Strike in Britain, but so determined was he to attend that he rode his bicycle unaccompanied from Southampton to Sherborne, stopping overnight at an inn.
Turing's natural inclination toward mathematics and science did not earn him respect from some of the teachers at Sherborne, whose definition of education placed more emphasis on the classics. His headmaster wrote to his parents: 'I hope he will not fall between two stools. If he is to stay at public school, he must aim at becoming 'educated'. If he is to be solely a 'Scientific Specialist', he is wasting his time at a public school'. Despite this, Turing continued to show remarkable ability in the studies he loved, solving advanced problems in 1927 without having studied even elementary calculus. In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but he extrapolated Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.
At Sherborne, Turing formed an important friendship with fellow pupil Christopher Morcom, which provided inspiration in Turing's future endeavours. However, the friendship was cut short by Morcom's death in February 1930 from complications of bovine tuberculosis contracted after drinking infected cow's milk some years previously. This event shattered Turing's religious faith. He became an atheist and adopted the conviction that all phenomena, including the workings of the human brain, must be materialistic, but he still believed in the survival of the spirit after death.
University and work on computability.
After Sherborne, Turing studied as an undergraduate from 1931 to 1934 at King's College, Cambridge, where he gained first-class honours in mathematics. In 1935, at the age of 22, he was elected a fellow of King's on the strength of a dissertation in which he proved the central limit theorem, unaware that it had already been proved in 1922 by Jarl Waldemar Lindeberg.
In 1928, German mathematician David Hilbert had called attention to the 'Entscheidungsproblem' (decision problem). In his momentous paper 'On Computable Numbers, with an Application to the Entscheidungsproblem' (submitted on 28 May 1936 and delivered 12 November), Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the 'Entscheidungsproblem' by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
Although Turing's proof was published shortly after Alonzo Church's equivalent proof using his lambda calculus, Turing had been unaware of Church's work. Turing's approach is considerably more accessible and intuitive than Church's. It was also novel in its notion of a 'Universal Machine' (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other computation machine, or in other words, it is provably capable of computing anything that is computable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation.
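The machine model described above is simple enough to simulate in a few lines. The sketch below is a modern illustration, not Turing's own notation: a machine is just a transition table mapping (state, symbol) to (new state, symbol to write, head move), and the `increment` machine is a made-up example that adds one mark to a unary number.

```python
# Illustrative sketch: a minimal Turing machine simulator.
# The tape is stored as a dict (position -> symbol), so it is
# effectively unbounded in both directions.

def run_turing_machine(transitions, tape, start_state, accept_state, max_steps=10_000):
    tape = dict(enumerate(tape))
    state, head = start_state, 0
    for _ in range(max_steps):
        if state == accept_state:
            # Read the tape back as a string, left to right.
            return ''.join(tape[i] for i in sorted(tape))
        symbol = tape.get(head, '_')            # '_' is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    raise RuntimeError('machine did not halt within max_steps')

# Example machine: walk right over a unary number and append one
# mark, i.e. compute n + 1 in unary.
increment = {
    ('scan', '1'): ('scan', '1', 'R'),   # skip over existing marks
    ('scan', '_'): ('done', '1', 'R'),   # write one more mark, accept
}

print(run_turing_machine(increment, '111', 'scan', 'done'))  # 1111
```

The `max_steps` cap is itself a nod to the halting problem: there is no general way to decide in advance whether a given table will ever reach its accept state.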
From September 1936 to July 1938, he spent most of his time studying under Church at Princeton University. In addition to his purely mathematical work, he studied cryptology and also built three of four stages of an electro-mechanical binary multiplier. In June 1938, he obtained his PhD from Princeton; his dissertation, 'Systems of Logic Based on Ordinals', introduced the concept of ordinal logic and the notion of relative computing, where Turing machines are augmented with so-called oracles, allowing a study of problems that cannot be solved by a Turing machine.
When Turing returned to Cambridge, he attended lectures given by Ludwig Wittgenstein about the foundations of mathematics. The two argued and disagreed, with Turing defending formalism and Wittgenstein propounding his view that mathematics does not discover any absolute truths but rather invents them. He also started to work part-time with the Government Code and Cypher School (GC&CS).
Cryptanalysis.
During the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park. The historian and wartime codebreaker Asa Briggs has said, 'You needed exceptional talent, you needed genius at Bletchley and Turing's was that genius.'
From September 1938, Turing had been working part-time with the GC&CS, the British code breaking organisation. He concentrated on cryptanalysis of the Enigma, with Dilly Knox, a senior GC&CS codebreaker. Soon after the July 1939 Warsaw meeting at which the Polish Cipher Bureau had provided the British and French with the details of the wiring of Enigma rotors and their method of decrypting Enigma messages, Turing and Knox started to work on a less fragile approach to the problem. The Polish method relied on an insecure indicator procedure that the Germans were likely to change, which they did in May 1940. Turing's approach was more general, using crib-based decryption for which he produced the functional specification of the bombe (an improvement of the Polish Bomba).
On 4 September 1939, the day after the UK declared war on Germany, Turing reported to Bletchley Park, the wartime station of GC&CS.
Specifying the bombe was the first of five major cryptanalytical advances that Turing made during the war. The others were: deducing the indicator procedure used by the German navy; developing a statistical procedure for making much more efficient use of the bombes dubbed 'Banburismus'; developing a procedure for working out the cam settings of the wheels of the Lorenz SZ 40/42 ('Tunny') dubbed 'Turingery' and, towards the end of the war, the development of a portable secure voice scrambler at Hanslope Park that was codenamed 'Delilah'.
By using statistical techniques to optimise the trial of different possibilities in the code breaking process, Turing made an innovative contribution to the subject. He wrote two papers discussing mathematical approaches which were entitled 'Report on the applications of probability to cryptography' and 'Paper on statistics of repetitions', which were of such value to GC&CS and its successor GCHQ, that they were not released to the UK National Archives until April 2012, shortly before the centenary of his birth. A GCHQ mathematician said at the time that the fact that the contents had been restricted for some 70 years demonstrated their importance.
Turing had something of a reputation for eccentricity at Bletchley Park. He was known to his colleagues as 'Prof' and his treatise on Enigma was known as 'The Prof's Book'. Jack Good, a cryptanalyst who worked with him, is quoted by Ronald Lewin as having said of Turing:
In the first week of June each year he would get a bad attack of hay fever, and he would cycle to the office wearing a service gas mask to keep the pollen off. His bicycle had a fault: the chain would come off at regular intervals. Instead of having it mended he would count the number of times the pedals went round and would get off the bicycle in time to adjust the chain by hand. Another of his eccentricities was that he chained his mug to the radiator pipes to prevent it being stolen.
While working at Bletchley, Turing, a talented long-distance runner, occasionally ran to London when he was needed for high-level meetings, and he was capable of world-class marathon standards. Turing tried out for the 1948 British Olympic team but was hampered by an injury. His tryout time for the marathon was only 11 minutes slower than British silver medallist Thomas Richards' Olympic race time of 2 hours 35 minutes. He was the best runner for the Walton Athletic Club, which discovered him when he passed the group while running alone.
In 1945, Turing was awarded the OBE by King George VI for his wartime services, but his work remained secret for many years.
Turing–Welchman bombe.
Within weeks of arriving at Bletchley Park, Turing had specified an electromechanical machine that could help break Enigma more effectively than the Polish 'bomba kryptologiczna', from which its name was derived. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack Enigma-enciphered messages.
Jack Good opined:
Turing's most important contribution, I 'think', was of part of the design of the bombe, the cryptanalytic machine. He had the idea that you could use, in effect, a theorem in logic which sounds to the untrained ear rather absurd; namely that from a contradiction, you can deduce 'everything.'
The bombe searched for possible correct settings used for an Enigma message (i.e. rotor order, rotor settings and plugboard settings), using a suitable 'crib': a fragment of probable plaintext. For each possible setting of the rotors (which had of the order of 10^19 states, or 10^22 for the four-rotor U-boat variant), the bombe performed a chain of logical deductions based on the crib, implemented electrically. The bombe detected when a contradiction had occurred, and ruled out that setting, moving on to the next. Most of the possible settings would cause contradictions and be discarded, leaving only a few to be investigated in detail. The first bombe was installed on 18 March 1940.
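The shape of that search can be caricatured in a few lines. The sketch below is a toy analogue only: it replaces Enigma with a trivial letter-shift cipher and the chains of electrical plugboard deductions with a direct consistency check, but it follows the same pattern of enumerating candidate settings, testing each against a crib, and discarding any that produce a contradiction.

```python
# Toy analogue of the bombe's elimination loop (not Enigma itself):
# the "cipher" here is a simple Caesar shift with 26 possible keys.

import string

ALPHABET = string.ascii_uppercase

def shift_encrypt(plaintext, key):
    return ''.join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in plaintext)

def surviving_keys(ciphertext, crib, crib_pos):
    """Return the candidate keys not ruled out by the crib."""
    survivors = []
    segment = ciphertext[crib_pos:crib_pos + len(crib)]
    for key in range(26):                      # every candidate setting
        if shift_encrypt(crib, key) == segment:
            survivors.append(key)              # consistent: keep
        # otherwise: contradiction with the crib, setting ruled out
    return survivors

# 'WETTER' ("weather") was a classic Enigma crib.
ciphertext = shift_encrypt('WETTERBERICHT', 7)
print(surviving_keys(ciphertext, 'WETTER', 0))  # [7]
```

The real machine's power came from the fact that a single contradiction propagated electrically through the whole hypothesis chain, so an entire setting could be ruled out almost instantly.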
By late 1941, Turing and his fellow cryptanalysts Gordon Welchman, Hugh Alexander, and Stuart Milner-Barry were frustrated. Building on the brilliant work of the Poles, they had set up a good working system for decrypting Enigma signals but they only had a few people and a few bombes so they did not have time to translate all the signals. In the summer they had had considerable success and shipping losses had fallen to under 100,000 tons a month but they were still on a knife-edge. They badly needed more resources to keep abreast of German adjustments. They had tried to get more people and fund more bombes through the proper channels but they were getting nowhere. Finally, breaking all the rules, on 28 October they wrote directly to Churchill spelling out their difficulties. They emphasised how small their need was compared with the vast expenditure of men and money by the forces and compared with the level of assistance they could offer to the forces.
The effect was electric. Churchill wrote a memo to General Ismay which read: 'ACTION THIS DAY. Make sure they have all they want on extreme priority and report to me that this has been done.' On 18 November the chief of the secret service reported that every possible measure was being taken. More than two hundred bombes were in operation by the end of the war.
Hut 8 and Naval Enigma.
Turing decided to tackle the particularly difficult problem of German naval Enigma 'because no one else was doing anything about it and I could have it to myself'. In December 1939, Turing solved the essential part of the naval indicator system, which was more complex than the indicator systems used by the other services. That same night he also conceived of the idea of 'Banburismus', a sequential statistical technique (what Abraham Wald later called sequential analysis) to assist in breaking naval Enigma, 'though I was not sure that it would work in practice, and was not in fact sure until some days had actually broken'. For this he invented a measure of weight of evidence that he called the 'ban'. Banburismus could rule out certain sequences of the Enigma rotors, substantially reducing the time needed to test settings on the bombes.
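Turing's 'ban' has a precise modern reading: it is the base-10 logarithm of a likelihood ratio, so evidence from independent observations adds. The probabilities in the sketch below are made-up illustrative numbers, not Banburismus data.

```python
# Illustrative sketch of the 'ban': the weight of evidence for a
# hypothesis H over an alternative is
#     log10( P(observation | H) / P(observation | alternative) ).
# One ban multiplies the prior odds by 10; Bletchley Park worked in
# decibans (tenths of a ban).

import math

def weight_of_evidence_bans(p_obs_given_h, p_obs_given_alt):
    return math.log10(p_obs_given_h / p_obs_given_alt)

# Example (hypothetical numbers): an observation three times as likely
# under the hypothesis as under the alternative.
w = weight_of_evidence_bans(0.3, 0.1)
print(round(w, 3), 'bans =', round(10 * w, 1), 'decibans')  # 0.477 bans = 4.8 decibans
```

Because logarithms turn products into sums, scores from successive letter comparisons could simply be accumulated until the total crossed a decision threshold, which is the essence of what Wald later formalised as sequential analysis.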
In 1941, Turing proposed marriage to Hut 8 co-worker Joan Clarke, a fellow mathematician and cryptanalyst, but their engagement was short-lived. After admitting his homosexuality to his fiancée, who was reportedly 'unfazed' by the revelation, Turing decided that he could not go through with the marriage.
Turing travelled to the United States in November 1942 and worked with US Navy cryptanalysts on Naval Enigma and bombe construction in Washington. He visited their Computing Machine Laboratory at Dayton, Ohio. His reaction to the American Bombe design was far from enthusiastic:
It seems a pity for them to go out of their way to build a machine to do all this stopping if it is not necessary. I am now converted to the extent of thinking that starting from scratch on the design of a Bombe, this method is about as good as our own. The American Bombe program was to produce 336 Bombes, one for each wheel order. I used to smile inwardly at the conception; their test (of commutators) can hardly be considered conclusive, as they were not testing for the bounce with electronic stop finding devices.

During this trip, he also assisted at Bell Labs with the development of secure speech devices.
He returned to Bletchley Park in March 1943. During his absence, Hugh Alexander had officially assumed the position of head of Hut 8, although Alexander had been 'de facto' head for some time—Turing having little interest in the day-to-day running of the section. Turing became a general consultant for cryptanalysis at Bletchley Park.
Alexander wrote as follows about his contribution:
There should be no question in anyone's mind that Turing's work was the biggest factor in Hut 8's success. In the early days he was the only cryptographer who thought the problem worth tackling and not only was he primarily responsible for the main theoretical work within the Hut but he also shared with Welchman and Keen the chief credit for the invention of the Bombe. It is always difficult to say that anyone is absolutely indispensable but if anyone was indispensable to Hut 8 it was Turing. The pioneer's work always tends to be forgotten when experience and routine later make everything seem easy and many of us in Hut 8 felt that the magnitude of Turing's contribution was never fully realised by the outside world.
Turingery.
In July 1942, Turing devised a technique termed 'Turingery' (or jokingly 'Turingismus') for use against the Lorenz cipher messages produced by the Germans' new 'Geheimschreiber' (secret writer) machine. This was a teleprinter rotor cipher attachment codenamed 'Tunny' at Bletchley Park. Turingery was a method of 'wheel-breaking', i.e. a procedure for working out the cam settings of Tunny's wheels. He also introduced the Tunny team to Tommy Flowers who, under the guidance of Max Newman, went on to build the Colossus computer, the world's first programmable digital electronic computer, which replaced a simpler prior machine (the Heath Robinson), and whose superior speed allowed the statistical decryption techniques to be applied usefully to the messages. Some have mistakenly said that Turing was a key figure in the design of the Colossus computer. Turingery and the statistical approach of Banburismus undoubtedly fed into the thinking about cryptanalysis of the Lorenz cipher, but he was not directly involved in the Colossus development.
Secure speech device (Delilah).
Following his work at Bell Labs in the US, Turing pursued the idea of electronic enciphering of speech in the telephone system, and in the latter part of the war, he moved to work for the Secret Service's Radio Security Service (later HMGCC) at Hanslope Park. There he further developed his knowledge of electronics with the assistance of engineer Donald Bayley. Together they undertook the design and construction of a portable secure voice communications machine codenamed 'Delilah'. It was intended for different applications but lacked the capability for use with long-distance radio transmissions, and in any case, Delilah was completed too late to be used during the war. Though the system worked fully, with Turing demonstrating it to officials by encrypting and decrypting a recording of a Winston Churchill speech, Delilah was not adopted for use.
Turing also consulted with Bell Labs on the development of SIGSALY, a secure voice system that was used in the later years of the war.
Early computers and the Turing test.
From 1945 to 1947, Turing lived in Richmond, London while he worked on the design of the ACE (Automatic Computing Engine) at the National Physical Laboratory (NPL). He presented a paper on 19 February 1946, which was the first detailed design of a stored-program computer. Von Neumann's incomplete 'First Draft of a Report on the EDVAC' had predated Turing's paper, but it was much less detailed and, according to John R. Womersley, Superintendent of the NPL Mathematics Division, it 'contains a number of ideas which are Dr. Turing's own'. Although ACE was a feasible design, the secrecy surrounding the wartime work at Bletchley Park led to delays in starting the project and he became disillusioned. In late 1947 he returned to Cambridge for a sabbatical year during which he produced a seminal work on 'Intelligent Machinery' that was not published in his lifetime. While he was at Cambridge, the Pilot ACE was being built in his absence. It executed its first program on 10 May 1950. Although the full version of Turing's ACE was never built, a number of computers around the world owe much to it, for example, the English Electric DEUCE and the American Bendix G-15.
According to the memoirs of the German computer pioneer Heinz Billing from the Max Planck Institute for Physics, published by Genscher, Düsseldorf (1997), there was a meeting between Alan Turing and Konrad Zuse. It took place in Göttingen in 1947 and took the form of a colloquium. Participants were Womersley, Turing, Porter from England and a few German researchers like Zuse, Walther, and Billing. (For more details see Herbert Bruderer, 'Konrad Zuse und die Schweiz').
In 1948, he was appointed Reader in the Mathematics Department at the University of Manchester. In 1949, he became Deputy Director of the Computing Laboratory there, working on software for one of the earliest stored-program computers, the Manchester Mark 1. During this time he continued to do more abstract work in mathematics, and in 'Computing Machinery and Intelligence' ('Mind', October 1950), Turing addressed the problem of artificial intelligence and proposed an experiment which became known as the Turing test, an attempt to define a standard for a machine to be called 'intelligent'. The idea was that a computer could be said to 'think' if a human interrogator could not tell it apart, through conversation, from a human being. In the paper, Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child's mind and then subject it to a course of education. A reversed form of the Turing test is widely used on the Internet; the CAPTCHA test is intended to determine whether the user is a human or a computer.
In 1948, Turing, working with his former undergraduate colleague D. G. Champernowne, began writing a chess program for a computer that did not yet exist. By 1950, the program was completed and dubbed 'Turochamp'. In 1952, he tried to implement it on a Ferranti Mark 1, but the computer lacked the power to execute the program. Instead, Turing played a game in which he simulated the computer, taking about half an hour per move. The game was recorded. The program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife.
His Turing test was a significant, characteristically provocative and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century.
He also invented the LU decomposition method in 1948, used today for solving matrix equations.
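The method can be sketched briefly. The code below is a textbook Doolittle factorisation without pivoting (so it assumes nonzero pivots), shown as it is used today rather than as Turing presented it in 1948: factor A into a unit lower-triangular L and upper-triangular U, then solve Ax = b by forward- and back-substitution.

```python
# Minimal LU decomposition sketch (Doolittle form, no pivoting).

def lu_decompose(a):
    n = len(a)
    lower = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    upper = [row[:] for row in a]                                  # copy of A
    for k in range(n):
        for i in range(k + 1, n):
            factor = upper[i][k] / upper[k][k]   # assumes nonzero pivot
            lower[i][k] = factor
            for j in range(k, n):
                upper[i][j] -= factor * upper[k][j]
    return lower, upper

def lu_solve(lower, upper, b):
    n = len(b)
    y = [0.0] * n                    # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(lower[i][j] * y[j] for j in range(i))
    x = [0.0] * n                    # back substitution: U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(upper[i][j] * x[j] for j in range(i + 1, n))) / upper[i][i]
    return x

# Solve 4x + 3y = 10, 6x + 3y = 12.
L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(lu_solve(L, U, [10.0, 12.0]))  # [1.0, 2.0]
```

The practical appeal is that the expensive factorisation is done once; solving for additional right-hand sides then costs only the two cheap substitution passes.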
Pattern formation and mathematical biology.
Turing worked from 1952 until his death in 1954 on mathematical biology, specifically morphogenesis. He published one paper on the subject called 'The Chemical Basis of Morphogenesis' in 1952, putting forth the Turing hypothesis of pattern formation (the theory was experimentally confirmed 60 years after his death). His central interest in the field was understanding Fibonacci phyllotaxis, the existence of Fibonacci numbers in plant structures. He used reaction–diffusion equations which are central to the field of pattern formation. Later papers went unpublished until 1992 when 'Collected Works of A.M. Turing' was published. His contribution is considered a seminal piece of work in this field. Removal of 'Hox' genes causes an increased number of digits (up to 14) in mice, demonstrating a Turing-type mechanism in the development of the hand.
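The reaction–diffusion idea can be sketched numerically. The two-species system below is a generic, hypothetical activator–inhibitor pair, not the equations from Turing's 1952 paper; it only illustrates the structure he studied: two chemical concentrations on a spatial domain, local reactions, and diffusion at very different rates, which can destabilise a uniform state even though diffusion on its own is smoothing.

```python
# Illustrative sketch: explicit Euler steps of a generic two-species
# reaction-diffusion system on a 1-D ring,
#     du/dt = Du * lap(u) + f(u, v)
#     dv/dt = Dv * lap(v) + g(u, v)
# with made-up reaction terms f = u - u**3 - v and g = u - v, and a
# large ratio between the two diffusion coefficients.

import random

def laplacian(field):
    n = len(field)
    return [field[(i - 1) % n] - 2 * field[i] + field[(i + 1) % n] for i in range(n)]

def step(u, v, du=0.01, dv=0.5, dt=0.1):
    lu, lv = laplacian(u), laplacian(v)
    u_new = [u[i] + dt * (du * lu[i] + u[i] - u[i] ** 3 - v[i]) for i in range(len(u))]
    v_new = [v[i] + dt * (dv * lv[i] + u[i] - v[i]) for i in range(len(v))]
    return u_new, v_new

random.seed(0)
u = [random.uniform(-0.05, 0.05) for _ in range(64)]  # noise around the uniform state
v = [0.0] * 64
for _ in range(200):
    u, v = step(u, v)
print(len(u), len(v))  # 64 64
```

Starting from small random noise, the slowly diffusing species can lock into stable spatial structure, which is the qualitative mechanism Turing proposed for patterns such as animal markings.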
Conviction for indecency.
In January 1952, Turing, then 39, started a relationship with Arnold Murray, a 19-year-old unemployed man. Turing met Murray just before Christmas outside the Regal Cinema when walking down Manchester's Oxford Road and had invited him to lunch. On 23 January Turing's house was burgled. Murray told Turing that the burglar was an acquaintance of his, and Turing reported the crime to the police. During the investigation he acknowledged a sexual relationship with Murray. Homosexual acts were criminal offences in the United Kingdom at that time, and both men were charged with gross indecency under Section 11 of the Criminal Law Amendment Act 1885. Initial committal proceedings for the trial occurred on 27 February, where Turing's solicitor 'reserved his defence'.
Later, convinced by the advice of his brother and other lawyers, Turing entered a plea of 'guilty', in spite of the fact that he felt no remorse or guilt for having committed acts of homosexuality. The case, 'Regina v. Turing and Murray,' was brought to trial on 31 March 1952, when Turing was convicted and given a choice between imprisonment and probation, which would be conditional on his agreement to undergo hormonal treatment designed to reduce libido. He accepted the option of treatment via injections of stilboestrol, a synthetic oestrogen; this treatment was continued for the course of one year. The treatment rendered Turing impotent and caused gynaecomastia, fulfilling in the literal sense, Turing's prediction that 'no doubt I shall emerge from it all a different man, but quite who I've not found out'. Murray was given a conditional discharge.
Turing's conviction led to the removal of his security clearance and barred him from continuing with his cryptographic consultancy for the Government Communications Headquarters (GCHQ), the British signals intelligence agency that had evolved from GC&CS in 1946 (though he kept his academic job). He was denied entry into the United States after his conviction in 1952, but was free to visit other European countries, even though this was viewed by some as a security risk. At the time, there was acute public anxiety about homosexual entrapment of spies by Soviet agents, because of the recent exposure of the first two members of the Cambridge Five, Guy Burgess and Donald Maclean, as KGB double agents. Turing was never accused of espionage but, in common with all who had worked at Bletchley Park, he was prevented by the Official Secrets Act from discussing his war work.
Death.
On 8 June 1954, Turing's cleaner found him dead. He had died the previous day. A post-mortem examination established that the cause of death was cyanide poisoning. When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide, it was speculated that this was the means by which a fatal dose was consumed. An inquest determined that he had committed suicide, and he was cremated at Woking Crematorium on 12 June 1954. Turing's ashes were scattered there, just as his father's had been.
Philosophy professor Jack Copeland has questioned various aspects of the coroner's historical verdict, suggesting the alternative explanation of the accidental inhalation of cyanide fumes from an apparatus for gold electroplating spoons, using potassium cyanide to dissolve the gold, which Turing had set up in his tiny spare room. Copeland notes that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before bed, and it was not unusual for it to be discarded half-eaten. In addition, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) 'with good humour' and had shown no sign of despondency prior to his death, in fact, setting down a list of tasks he intended to complete upon return to his office after the holiday weekend. At the time, Turing's mother believed that the ingestion was accidental, caused by her son's careless storage of laboratory chemicals. Biographer Andrew Hodges suggests that Turing may have arranged the cyanide experiment deliberately, to give his mother some plausible deniability.
Turing's biographers Andrew Hodges and David Leavitt have suggested that Turing was re-enacting a scene from the 1937 Walt Disney film 'Snow White', his favourite fairy tale, both noting that (in Leavitt's words) he took 'an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew'.
Recognition and tributes.
A biography published by the Royal Society shortly after Turing's death, while his wartime work was still subject to the Official Secrets Act, recorded:
Three remarkable papers written just before the war, on three diverse mathematical subjects, show the quality of the work that might have been produced if he had settled down to work on some big problem at that critical time. For his work at the Foreign Office he was awarded the OBE.
Since 1966, the Turing Award has been given annually by the Association for Computing Machinery for technical or theoretical contributions to the computing community. It is widely considered to be the computing world's highest honour, equivalent to the Nobel Prize.
'Breaking the Code' is a 1986 play by Hugh Whitemore about Alan Turing. The play ran in London's West End beginning in November 1986 and on Broadway from 15 November 1987 to 10 April 1988. There was also a 1996 BBC television production (broadcast in the United States by PBS). In all three performances Turing was played by Derek Jacobi. The Broadway production was nominated for three Tony Awards including Best Actor in a Play, Best Featured Actor in a Play, and Best Direction of a Play, and for two Drama Desk Awards, for Best Actor and Best Featured Actor.
On 23 June 1998, on what would have been Turing's 86th birthday, his biographer, Andrew Hodges, unveiled an official English Heritage blue plaque at his birthplace and childhood home in Warrington Crescent, London, later the Colonnade Hotel.
To mark the 50th anniversary of his death, a memorial plaque was unveiled on 7 June 2004 at his former residence, Hollymeade, in Wilmslow, Cheshire.
On 13 March 2000, Saint Vincent and the Grenadines issued a set of postage stamps to celebrate the greatest achievements of the 20th century, one of which carries a portrait of Turing against a background of repeated 0s and 1s, and is captioned: '1937: Alan Turing's theory of digital computing'. On 1 April 2003, Turing's work at Bletchley Park was named an IEEE Milestone. On 28 October 2004, a bronze statue of Alan Turing sculpted by John W. Mills was unveiled at the University of Surrey in Guildford, marking the 50th anniversary of Turing's death; it portrays him carrying his books across the campus. In 2006, Boston Pride named Turing their Honorary Grand Marshal.
Turing was one of four mathematicians examined in the 2008 BBC documentary entitled 'Dangerous Knowledge'. The Princeton Alumni Weekly named Turing the second most significant alumnus in the history of Princeton University, second only to President James Madison. A 1.5-ton, life-size statue of Turing was unveiled on 19 June 2007 at Bletchley Park. Built from approximately half a million pieces of Welsh slate, it was sculpted by Stephen Kettle, having been commissioned by the late American billionaire Sidney Frank.
Turing has been honoured in various ways in Manchester, the city where he worked towards the end of his life. In 1994, a stretch of the A6010 road (the Manchester city intermediate ring road) was named 'Alan Turing Way'. A bridge carrying this road was widened, and carries the name Alan Turing Bridge. A statue of Turing was unveiled in Manchester on 23 June 2001 in Sackville Park, between the University of Manchester building on Whitworth Street and the Canal Street gay village. The memorial statue depicts the 'father of Computer Science' sitting on a bench at a central position in the park.
Turing is shown holding an apple—a symbol classically used to represent forbidden knowledge, the object that inspired Isaac Newton's theory of gravitation, and the assumed means of Turing's own death. The cast bronze bench carries in relief the text 'Alan Mathison Turing 1912–1954', and the motto 'Founder of Computer Science' as it could appear if encoded by an Enigma machine: 'IEKYF ROMSI ADXUO KVKZC GUBJ'.
A plinth at the statue's feet says 'Father of computer science, mathematician, logician, wartime codebreaker, victim of prejudice'. There is also a Bertrand Russell quotation saying 'Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture.' The sculptor buried his old Amstrad computer, which was an early popular home computer, under the plinth, as a tribute to 'the godfather of all modern computers'.
In 1999, 'Time Magazine' named Turing as one of the '100 Most Important People of the 20th Century' and stated: 'The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine.' Turing is featured in the 1999 Neal Stephenson novel 'Cryptonomicon'.
In 2002, a new building named after Alan Turing was constructed on the Malvern site of QinetiQ. It houses about 200 scientists and engineers, some of whom work on big data computing.
In 2002, Turing was ranked twenty-first on the BBC nationwide poll of the 100 Greatest Britons. In 2006 British writer and mathematician Ioan James chose Turing as one of twenty people to feature in his book about famous historical figures who may have had some of the traits of Asperger syndrome. In 2010, actor/playwright Jade Esteban Estrada portrayed Turing in the solo musical, 'ICONS: The Lesbian and Gay History of the World, Vol. 4'. In 2011, in 'The Guardian's' 'My hero' series, writer Alan Garner chose Turing as his hero and described how they had met whilst out jogging in the early 1950s. Garner remembered Turing as 'funny and witty' and said that he 'talked endlessly'.
In 2006, Alan Turing was named as an LGBT History Month Icon, with accompanying online resources.
In February 2011, Turing's papers from the Second World War were bought for the nation with an 11th-hour bid by the National Heritage Memorial Fund, allowing them to stay at Bletchley Park.
In November 2011, Channel 4 aired the docudrama 'Britain's Greatest Codebreaker' about the life of Turing.
The logo of Apple Computer is often erroneously referred to as a tribute to Alan Turing, with the bite mark a reference to his death. Both the designer of the logo and the company deny that there is any homage to Turing in the design of the logo. Stephen Fry has recounted asking Steve Jobs whether the design was intentional, saying that Jobs' response was, 'God, we wish it were.'
The Turing Rainbow Festival, held in Madurai, India in 2012 for celebrating the LGBT and Genderqueer cause, was named in honour of Alan Turing by Gopi Shankar of Srishti Madurai.
The francophone singer-songwriter Salvatore Adamo paid tribute to Turing with his song 'Alan et la Pomme'.
Alan Turing's life and work featured in a BBC children's programme about famous scientists, first broadcast on 12 March 2014. There was, however, no reference to Turing's sexuality.
On 26 April 2014, a major choral work written by James McCarthy depicting the life of Alan Turing was premiered in the Barbican Hall, London, by the Hertfordshire Chorus.
Government apology and pardon.
In August 2009, John Graham-Cumming started a petition urging the British Government to apologise for Turing's prosecution as a homosexual. The petition received thousands of signatures. Prime Minister Gordon Brown acknowledged the petition, releasing a statement on 10 September 2009 apologising and describing the treatment of Turing as 'appalling':
Thousands of people have come together to demand justice for Alan Turing and recognition of the appalling way he was treated. While Turing was dealt with under the law of the time and we can't put the clock back, his treatment was of course utterly unfair and I am pleased to have the chance to say how deeply sorry I and we all are for what happened to him .. So on behalf of the British government, and all those who live freely thanks to Alan's work I am very proud to say: we're sorry, you deserved so much better.
In December 2011, William Jones created an e-petition requesting the British Government pardon Turing for his conviction for 'gross indecency':
We ask the HM Government to grant a pardon to Alan Turing for the conviction of 'gross indecency'. In 1952, he was convicted of 'gross indecency' with another man and was forced to undergo so-called 'organo-therapy' – chemical castration. Two years later, he killed himself with cyanide, aged just 41. Alan Turing was driven to a terrible despair and early death by the nation he'd done so much to save. This remains a shame on the UK government and UK history. A pardon can go some way to healing this damage. It may act as an apology to many of the other gay men, not as well-known as Alan Turing, who were subjected to these laws.
The petition gathered over 37,000 signatures, but the request was discouraged by Lord McNally, who gave the following opinion in his role as the Justice Minister:
A posthumous pardon was not considered appropriate as Alan Turing was properly convicted of what at the time was a criminal offence. He would have known that his offence was against the law and that he would be prosecuted.
It is tragic that Alan Turing was convicted of an offence which now seems both cruel and absurd—particularly poignant given his outstanding contribution to the war effort. However, the law at the time required a prosecution and, as such, long-standing policy has been to accept that such convictions took place and, rather than trying to alter the historical context and to put right what cannot be put right, ensure instead that we never again return to those times.
On 26 July 2012, a bill was introduced in the House of Lords to grant a statutory pardon to Turing for offences under section 11 of the Criminal Law Amendment Act 1885, of which he was convicted on 31 March 1952. Late in the year in a letter to 'The Daily Telegraph', the physicist Stephen Hawking and 10 other signatories including the Astronomer Royal Lord Rees, President of the Royal Society Sir Paul Nurse, Lady Trumpington (who worked for Turing during the war), and Lord Sharkey (the bill's sponsor) called on Prime Minister David Cameron to act on the pardon request. The Government indicated it would support the bill, and it passed its third reading in the Lords in October.
Before the bill could be debated in the House of Commons, the Government elected to proceed under the royal prerogative of mercy. On 24 December 2013, Queen Elizabeth II signed a pardon for Turing's conviction for gross indecency, with immediate effect. Announcing the pardon, Justice Secretary Chris Grayling said Turing deserved to be 'remembered and recognised for his fantastic contribution to the war effort' and not for his later criminal conviction. The Queen then officially pronounced Turing pardoned in August 2014. The Queen's action is only the fourth royal pardon granted since the conclusion of World War II. This case is unusual in that pardons are normally granted only when the person is technically innocent, and a request has been made by the family or other interested party. Neither condition was met in regard to Turing's conviction.
In a letter to Prime Minister David Cameron after announcement of the pardon, human rights advocate Peter Tatchell criticised the decision to single out Turing due to his fame and achievements, when thousands of others convicted under the same law have not received pardons. Tatchell also called for a new investigation into Turing's death:
A new inquiry is long overdue, even if only to dispel any doubts about the true cause of his death – including speculation that he was murdered by the security services (or others). I think murder by state agents is unlikely. There is no known evidence pointing to any such act. However, it is a major failing that this possibility has never been considered or investigated.
Centenary celebrations.
To mark the 100th anniversary of Turing's birth, the Turing Centenary Advisory Committee (TCAC) co-ordinated the Alan Turing Year, a year-long programme of events around the world honouring Turing's life and achievements. The TCAC, chaired by S. Barry Cooper with Alan Turing's nephew Sir John Dermot Turing acting as Honorary President, worked with the University of Manchester faculty members and a broad spectrum of people from Cambridge University and Bletchley Park.
On 23 June 2012, Google featured an interactive doodle where visitors had to change the instructions of a Turing Machine, so when run, the symbols on the tape would match a provided sequence, featuring 'Google' in Baudot-Murray code.
The Bletchley Park Trust collaborated with Winning Moves to publish an Alan Turing edition of the board game Monopoly. The game's squares and cards have been revised to tell the story of Alan Turing's life, from his birthplace in Maida Vale to Hut 8 at Bletchley Park. The game also includes a replica of an original hand-drawn board created by William Newman, son of Turing's mentor, Max Newman, which Turing played on in the 1950s.
In the Philippines, the Department of Philosophy at De La Salle University-Manila hosted Turing 2012, an international conference on philosophy, artificial intelligence, and cognitive science from 27 to 28 March 2012 to commemorate the centenary of Turing's birth. Madurai, India held celebrations, in conjunction with Asia's first Gay Pride festival, with a programme attended by 6000 students.
UK celebrations.
There was a three-day conference in Manchester, UK in June, a two-day conference in San Francisco organised by the ACM, and a birthday party and Turing Centenary Conference in Cambridge, held at King's College, Cambridge and the University of Cambridge, the latter organised by the association Computability in Europe.
The Science Museum in London launched a free exhibition devoted to Turing's life and achievements in June 2012, to run until July 2013. In February 2012, the Royal Mail issued a stamp featuring Turing as part of its 'Britons of Distinction' series. The London 2012 Olympic Torch flame was passed on in front of Turing's statue in Sackville Gardens, Manchester, on the evening of 23 June 2012, the 100th anniversary of his birth.
On 22 June 2012 Manchester City Council, in partnership with the Lesbian and Gay Foundation, launched the Alan Turing Memorial Award which will recognise individuals or groups who have made a significant contribution to the fight against homophobia in Manchester.
At the University of Oxford, a new course in Computer Science and Philosophy was established to coincide with the centenary of Turing's birth.
Previous events have included a celebration of Turing's life and achievements, at the University of Manchester, arranged by the British Logic Colloquium and the British Society for the History of Mathematics on 5 June 2004.
Portrayal in adaptations.
Turing was portrayed by Derek Jacobi in the 1996 television movie 'Breaking the Code'. The drama-documentary 'Codebreaker', about Turing's life, was aired by UK's Channel 4 in 2011 and was released in the US in October 2012. The film features Ed Stoppard as Turing and Henry Goodman as Franz Greenbaum.
A musical work inspired by Turing's life, written by Neil Tennant and Chris Lowe of the Pet Shop Boys, entitled 'A Man from the Future', was announced in late 2013. 'A Man from the Future' was performed by the Pet Shop Boys and Juliet Stevenson (narrator), the BBC Singers, and the BBC Concert Orchestra conducted by Dominic Wheeler at the BBC Proms in Royal Albert Hall on 23 July 2014.
'Codebreaker', a choral work written by James McCarthy to settings of texts by the poets Wilfred Owen, Sara Teasdale, Walt Whitman, Oscar Wilde and Robert Burns, received its world premiere on 26 April 2014 in the Barbican Hall in London. It was performed by the Hertfordshire Chorus, who commissioned the work, led by David Temple, with soprano soloist Naomi Harvey providing the voice of Turing's mother.
The historical drama 'The Imitation Game', directed by Morten Tyldum and starring Benedict Cumberbatch as Turing, is set for cinematic release in 2014.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1209'>
Area
Area is the quantity that expresses the extent of a two-dimensional figure or shape, or planar lamina, in the plane. Surface area is its analog on the two-dimensional surface of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analog of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m2), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
Formal definition.
An approach to defining what is meant by 'area' is through axioms. 'Area' can be defined as a function from a collection M of special kind of plane figures (termed measurable sets) to the set of real numbers which satisfies the following properties:
It can be proved that such an area function actually exists.
Units.
Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m2), square centimetres (cm2), square millimetres (mm2), square kilometres (km2), square feet (ft2), square yards (yd2), square miles (mi2), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units.
The SI unit of area is the square metre, which is considered an SI derived unit.
Conversions.
The conversion between two square units is the square of the conversion between the corresponding length units. For example, since 1 foot = 12 inches, the relationship between square feet and square inches is 1 square foot = 144 square inches,
where 144 = 12 × 12. Similarly, 1 square kilometre = 1,000,000 square metres and 1 square metre = 10,000 square centimetres.
In addition, 1 square mile = 640 acres.
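The squared-ratio rule above can be sketched in a few lines of Python; the unit table and the function name here are illustrative, not part of any standard library:

```python
# Metres per unit of length; squaring the ratio of two entries gives
# the conversion factor between the corresponding units of area.
LENGTH_IN_METRES = {"m": 1.0, "cm": 0.01, "mm": 0.001, "km": 1000.0,
                    "in": 0.0254, "ft": 0.3048, "yd": 0.9144, "mi": 1609.344}

def convert_area(value, from_unit, to_unit):
    """Convert an area value by squaring the length-unit ratio."""
    ratio = LENGTH_IN_METRES[from_unit] / LENGTH_IN_METRES[to_unit]
    return value * ratio ** 2

# 1 ft = 12 in, so 1 square foot is 12 x 12 = 144 square inches.
print(convert_area(1, "ft", "in"))
```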
Other units.
There are several other common units for area. The 'are' was the original unit of area in the metric system, with 1 are = 100 square metres.
Though the are has fallen out of use, the hectare is still commonly used to measure land: 1 hectare = 100 ares = 10,000 square metres.
Other uncommon metric units of area include the tetrad, the hectad, and the myriad.
The acre is also commonly used to measure land areas, where 1 acre = 4,840 square yards = 43,560 square feet.
An acre is approximately 40% of a hectare.
On the atomic scale, area is measured in units of barns, such that 1 barn = 10⁻²⁸ square metres.
The barn is commonly used in describing the cross sectional area of interaction in nuclear physics.
In India, traditional units such as the bigha are also used to measure land, with sizes that vary regionally.
History.
Circle area.
In the fifth century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the fifth century BCE, also found that the area of a disk is proportional to its radius squared.
Subsequently, Book I of Euclid's 'Elements' dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book 'Measurement of a Circle'. (The circumference is 2π'r', and the area of a triangle is half the base times the height, yielding the area π'r'2 for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons).
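Archimedes' doubling method translates directly into a short program. Below is a minimal Python sketch (the function name is ours, and the chord recurrence is written in a numerically stable form that avoids subtracting nearly equal quantities):

```python
import math

def archimedes_pi(doublings=20):
    """Approximate pi by repeatedly doubling the number of sides of a
    regular polygon inscribed in a unit circle, starting from a hexagon."""
    n, side = 6, 1.0  # a hexagon inscribed in a unit circle has side 1
    for _ in range(doublings):
        # Half-angle identity for the chord of the 2n-gon, rearranged as
        # s' = s / sqrt(2 + sqrt(4 - s^2)) to avoid cancellation.
        side = side / math.sqrt(2.0 + math.sqrt(4.0 - side * side))
        n *= 2
    return n * side / 2.0  # half the perimeter tends to pi from below

print(archimedes_pi())  # agrees with math.pi to many decimal places
```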
Swiss scientist Johann Heinrich Lambert in 1761 proved that π, the ratio of a circle's area to its squared radius, is irrational, meaning it is not equal to the quotient of any two whole numbers. French mathematician Adrien-Marie Legendre proved in 1794 that π2 is also irrational. In 1882, German mathematician Ferdinand von Lindemann proved that π is transcendental (not the solution of any polynomial equation with rational coefficients), confirming a conjecture made by both Legendre and Euler.
Triangle area.
Heron (or Hero) of Alexandria found what is known as Heron's formula for the area of a triangle in terms of its sides, and a proof can be found in his book, 'Metrica', written around 60 CE. It has been suggested that Archimedes knew the formula over two centuries earlier, and since 'Metrica' is a collection of the mathematical knowledge available in the ancient world, it is possible that the formula predates the reference given in that work.
In 499 Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, expressed the area of a triangle as one-half the base times the height in the 'Aryabhatiya' (section 2.6).
A formula equivalent to Heron's was discovered by the Chinese independently of the Greeks. It was published in 1247 in 'Shushu Jiuzhang' (“Mathematical Treatise in Nine Sections”), written by Qin Jiushao.
Quadrilateral area.
In the 600s CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842 the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral.
General polygon area.
The development of Cartesian coordinates by René Descartes in the 1600s allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 1800s.
Areas determined using calculus.
The development of integral calculus in the late 1600s provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects.
Area formulas.
Polygon formulas.
For a non-self-intersecting (simple) polygon with 'n' vertices whose Cartesian coordinates ('x'i, 'y'i) ('i' = 0, 1, .., 'n'−1) are known, the area is given by the surveyor's formula:
'A' = (1/2) |Σ ('x'i'y'i+1 − 'x'i+1'y'i)|, summing over 'i' = 0, .., 'n'−1,
where when 'i' = 'n'−1, the index 'i'+1 is taken modulo 'n' and so refers to 0.
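As an illustrative sketch, the surveyor's formula can be implemented directly (the function name is ours):

```python
def polygon_area(vertices):
    """Area of a simple polygon via the surveyor's (shoelace) formula.
    `vertices` is a list of (x, y) pairs in traversal order."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]  # i = n-1 wraps to 0
        total += x_i * y_next - x_next * y_i
    return abs(total) / 2.0

# A 3 x 2 axis-aligned rectangle:
print(polygon_area([(0, 0), (3, 0), (3, 2), (0, 2)]))  # 6.0
```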
Rectangles.
The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length 'l' and width 'w', the formula for the area is:
'A' = 'lw' (rectangle).
That is, the area of the rectangle is the length multiplied by the width. As a special case, since 'l' = 'w' in the case of a square, the area of a square with side length 's' is given by the formula:
'A' = 's'2 (square).
The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
Dissection, parallelograms, and triangles.
Most other simple formulas for area follow from the method of dissection.
This involves cutting a shape into pieces, whose areas must sum to the area of the original shape.
For an example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in figure to the left. If the triangle is moved to the other side of the trapezoid, then the resulting figure is a rectangle. It follows that the area of the parallelogram is the same as the area of the rectangle:
However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram:
Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons.
Area of curved shapes.
Circles.
The formula for the area of a circle (more properly called the area of a disk) is based on a similar method. Given a circle of radius 'r', it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is 'r', and the width is half the circumference of the circle, or π'r'. Thus, the total area of the circle is 'r' × π'r', or π'r'2:
Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly π'r'2, which is the area of the circle.
This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral: 'A' = ∫ from −'r' to 'r' of 2√('r'2 − 'x'2) d'x' = π'r'2.
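That definite integral can be checked numerically. A minimal Python sketch using the midpoint rule (names are illustrative):

```python
import math

def disk_area(r, steps=100_000):
    """Approximate the disk area by integrating the vertical chord
    length 2*sqrt(r^2 - x^2) over -r <= x <= r with the midpoint rule."""
    dx = 2.0 * r / steps
    total = 0.0
    for i in range(steps):
        x = -r + (i + 0.5) * dx  # midpoint of the i-th strip
        total += 2.0 * math.sqrt(r * r - x * x) * dx
    return total

print(disk_area(1.0), math.pi)  # the two values agree closely
```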
Ellipses.
The formula for the area of an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes 'a' and 'b' the formula is: 'A' = π'ab'.
Surface area.
Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out. For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work 'On the Sphere and Cylinder'. The formula is:
'A' = 4π'r'2,
where 'r' is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus.
General formulas.
Area in calculus.
The area of a plane region can also be computed as a line integral around its boundary, for example 'A' = ∮'x' d'y' = −∮'y' d'x' (see Green's theorem) or the 'z'-component of (1/2)∮('r' × d'r').
Bounded area between two quadratic functions.
To find the bounded area between two quadratic functions, we subtract one from the other to write the difference as
'f'('x') − 'g'('x') = 'ax'2 + 'bx' + 'c',
where 'f'('x') is the quadratic upper bound and 'g'('x') is the quadratic lower bound. Define the discriminant of 'f'('x') − 'g'('x') as
Δ = 'b'2 − 4'ac'.
By simplifying the integral formula between the graphs of two functions (as given in the section above) and using Vieta's formula, we can obtain
'A' = Δ√Δ / (6'a'2), valid when Δ > 0.
The above remains valid if one of the bounding functions is linear instead of quadratic.
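A minimal Python sketch of this result, cross-checked against direct numerical integration (the closed form Δ√Δ/(6'a'2) is the standard one for a quadratic difference with positive discriminant; function names are ours):

```python
import math

def quadratic_gap_area(a, b, c):
    """Closed-form area between two quadratics whose difference is
    a*x^2 + b*x + c: Delta*sqrt(Delta) / (6*a^2), Delta = b^2 - 4ac.
    Assumes Delta > 0, i.e. the two graphs cross at two points."""
    delta = b * b - 4.0 * a * c
    return delta * math.sqrt(delta) / (6.0 * a * a)

def numeric_gap_area(a, b, c, steps=100_000):
    """Cross-check: integrate |a*x^2 + b*x + c| between its roots."""
    delta = b * b - 4.0 * a * c
    r1 = (-b - math.sqrt(delta)) / (2.0 * a)
    r2 = (-b + math.sqrt(delta)) / (2.0 * a)
    lo, hi = min(r1, r2), max(r1, r2)
    dx = (hi - lo) / steps
    return sum(abs(a * x * x + b * x + c) * dx
               for x in (lo + (i + 0.5) * dx for i in range(steps)))

# Difference -(x^2) + 1 crosses at x = -1 and x = 1; enclosed area is 4/3.
print(quadratic_gap_area(-1.0, 0.0, 1.0))
print(numeric_gap_area(-1.0, 0.0, 1.0))
```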
General formula for surface area.
The general formula for the surface area of the graph of a continuously differentiable function 'z' = 'f'('x','y'), where ('x','y') ∈ 'D' and 'D' is a region in the 'xy'-plane with smooth boundary, is:
'A' = ∬ over 'D' of √(('f'x)2 + ('f'y)2 + 1) d'x' d'y'.
An even more general formula for the area of the graph of a parametric surface in the vector form 'r' = 'r'('u','v'), where 'r' is a continuously differentiable vector function of ('u','v') ∈ 'D', is:
'A' = ∬ over 'D' of |'r'u × 'r'v| d'u' d'v'.
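The parametric formula can be evaluated numerically as a sketch. For the standard sphere parametrisation ('u' = polar angle, 'v' = azimuth) the cross-product magnitude works out analytically to 'r'2 sin 'u', so the double integral should recover the 4π'r'2 of the previous section (names are illustrative):

```python
import math

def sphere_area_parametric(r=1.0, nu=400, nv=400):
    """Surface area from A = double integral of |r_u x r_v| du dv for
    the sphere parametrised by u in [0, pi], v in [0, 2*pi], where
    |r_u x r_v| = r^2 * sin(u)."""
    du = math.pi / nu
    dv = 2.0 * math.pi / nv
    total = 0.0
    for i in range(nu):
        u = (i + 0.5) * du  # midpoint rule in the polar direction
        for j in range(nv):
            total += r * r * math.sin(u) * du * dv
    return total

print(sphere_area_parametric(), 4.0 * math.pi)  # should nearly match
```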
List of formulas.
The above calculations show how to find the areas of many common shapes.
The areas of irregular polygons can be calculated using the 'Surveyor's formula'.
Relation of area to perimeter.
The isoperimetric inequality states that, for a closed curve of length 'L' (so the region it encloses has perimeter 'L') and for area 'A' of the region that it encloses, 4π'A' ≤ 'L'2,
and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter.
At the other extreme, a figure with given perimeter 'L' could have an arbitrarily small area, as illustrated by a rhombus that is 'tipped over' arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°.
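Both extremes can be illustrated numerically. A short Python sketch, fixing the perimeter at 4 (the rhombus-area formula 's'2 sin 'θ' and the bound 'L'2/4π follow from the statements above; variable names are ours):

```python
import math

PERIMETER = 4.0
side = PERIMETER / 4.0  # rhombus side for the fixed perimeter

# A rhombus of side s and interior angle t has area s^2 * sin(t):
# 'tipping it over' (t -> 0) makes the area arbitrarily small.
for degrees in (90.0, 10.0, 1.0, 0.1):
    area = side * side * math.sin(math.radians(degrees))
    print(degrees, area)

# Isoperimetric bound: a closed curve of length L encloses at most
# L^2 / (4*pi) of area, attained only by the circle.
print(PERIMETER ** 2 / (4.0 * math.pi))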
For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius 'r'. This can be seen from the area formula 'πr'2 and the circumference formula 2'πr'.
The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side).
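The half-perimeter-times-apothem rule is easy to check against known exact values; a minimal sketch (function name is ours):

```python
import math

def regular_polygon_area(n, side):
    """Half the perimeter times the apothem, for a regular n-gon."""
    apothem = side / (2.0 * math.tan(math.pi / n))
    return 0.5 * (n * side) * apothem

# A regular hexagon with unit side has exact area 3*sqrt(3)/2.
print(regular_polygon_area(6, 1.0), 3.0 * math.sqrt(3.0) / 2.0)
# A square of side 2 (apothem 1) has area 4.
print(regular_polygon_area(4, 2.0))  # 4.0 (up to rounding)
```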
Fractals.
Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal.
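The scaling relationship above can be expressed as 'copies' = 'scale'^'D' and solved for the dimension 'D'. A brief Python sketch; the Koch curve is our added example of a fractal whose dimension is not an integer:

```python
import math

def similarity_dimension(copies, scale_factor):
    """Solve copies = scale_factor ** D for the dimension D."""
    return math.log(copies) / math.log(scale_factor)

# An ordinary polygon region: doubling lengths gives 4 copies -> dimension 2.
print(similarity_dimension(4, 2))  # 2.0
# Koch curve: tripling lengths gives 4 copies -> a non-integer dimension.
print(similarity_dimension(4, 3))  # about 1.26
```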
Area bisectors.
There are an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle.
Any line through the midpoint of a parallelogram bisects the area.
All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle.
Optimization.
Given a wire contour, the surface of least area spanning ('filling') it is a minimal surface. Familiar examples include soap bubbles.
The question of the filling area of the Riemannian circle remains open.
The circle has the largest area of any two-dimensional object having the same perimeter.
A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths.
A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral.
The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral.
The ratio of the area of the incircle to the area of an equilateral triangle, π/(3√3) ≈ 0.6046, is larger than that of any non-equilateral triangle.
The ratio of the area to the square of the perimeter of an equilateral triangle, 1/(12√3) ≈ 0.0481, is larger than that for any other triangle.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1210'>
Astronomical unit
An astronomical unit (abbreviated au; sometimes AU, a.u. and ua) is a unit of length, roughly the distance from the Earth to the Sun. However, that distance varies as the Earth orbits the Sun, from a maximum (aphelion) to a minimum (perihelion) and back again once a year. Originally, each distance was measured through observation, and the au was defined as their average, half the sum of the maximum and minimum, making the unit a kind of medium measure for Earth-to-Sun distance. It is now defined as exactly 149,597,870,700 metres (about 150 million km, or 93 million miles).
The astronomical unit is used primarily as a convenient yardstick for measuring distances within the Solar System. However, it is also a fundamental component in the definition of another critical unit of astronomical length, the parsec.
Development of unit definition.
The Earth's orbit around the Sun is shaped like an ellipse. The semi-major axis of that ellipse is half of a straight line that crosses the orbit at its extremes, the points of aphelion and perihelion, passing through the center of the sun along its way. Since ellipses are well-understood shapes, measuring the points of its extremes defined the exact shape mathematically, and made possible calculations for the entire orbit as well as predictions based upon observation. In addition, it mapped out exactly the largest straight-line distance the earth traverses over the course of a year, defining times and places for observing the largest parallax effects (apparent shifts of position) in nearby stars. Knowing the earth's shift and a star's shift enabled the star's distance to be calculated. But all measurements are subject to some degree of error or uncertainty, and the uncertainties in the length of the au only increased uncertainties in the stellar distances. Improvements in precision have always been a key to improving astronomical understanding. Throughout the twentieth century, measurements became increasingly precise and sophisticated, and ever more dependent upon accurate observation of the effects described by Einstein's theory of relativity and upon the mathematical tools it used.
Improving measurements were continually checked and cross-checked by means of our understanding of the laws of celestial mechanics, which govern the motions of objects in space. The expected positions and distances of objects at an established time are calculated (in au) from these laws, and assembled into a collection of data called an ephemeris. NASA's Jet Propulsion Laboratory provides one of several ephemeris computation services.
In 1976, in order to establish a yet more precise measure for the au, the International Astronomical Union (IAU) formally adopted a new definition. While directly based on the then-best available observational measurements, the definition was recast in terms of the then-best mathematical derivations from celestial mechanics and planetary ephemerides. It stated that 'the astronomical unit of length is that length ('A') for which the Gaussian gravitational constant ('k') takes the value 0.01720209895 when the units of measurement are the astronomical units of length, mass and time'. Equivalently, one au is the radius of an unperturbed circular Newtonian orbit about the sun of a particle having infinitesimal mass, moving with an angular frequency of 0.01720209895 radians per day; or alternatively that length for which the heliocentric gravitational constant (the product 'GM'☉) is equal to (0.01720209895)2 au3/d2, when the length is used to describe the positions of objects in the Solar System.
Subsequent explorations of the Solar System by space probes made it possible to obtain precise measurements of the relative positions of the inner planets and other objects by means of radar and telemetry. As with all radar measurements, these rely on measuring the time taken for photons to be reflected from an object. Since all photons move at the speed of light in vacuum, a fundamental constant of the universe, the distance of an object from the probe is basically the product of the speed of light and the measured time. For precision though, the calculations require adjustment for things such as the motions of the probe and object while the photons are in transit. In addition, the measurement of the time itself must be translated to a standard scale that accounts for relativistic time dilation. Comparison of the ephemeris positions with time measurements expressed in the TDB scale leads to a value for the speed of light in astronomical units per day (of 86,400 seconds). By 2009, the IAU had updated its standard measures to reflect improvements and recalculated the speed of light in au per day on the TDB scale.
Meanwhile, in 1983, the International Committee for Weights and Measures (CIPM) modified the International System of Units (SI, or 'modern' metric system) to make the metre entirely independent of physical objects, whose measurement inaccuracies had become too large for the objects to remain useful. Instead, it was redefined in terms of the speed of light in vacuum, which could be independently determined at need. The speed of light could then be expressed exactly as 'c'0 = 299,792,458 m/s, a standard also adopted by the IERS numerical standards. From this definition and the 2009 IAU standard, the time for light to traverse an au is found to be τA = 499.004783836 seconds, a little more than 8 minutes. By simple multiplication then, the best IAU 2009 estimate was 'A' = 'c'0τA = 149,597,870,700 metres, based on a comparison of JPL and IAA–RAS ephemerides.
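The multiplication described above is straightforward to reproduce; a minimal sketch using the SI-exact speed of light and the IAU 2009 light time:

```python
# Speed of light in vacuum, m/s -- exact by the SI definition of the metre
c0 = 299_792_458

# Light time for unit distance, seconds (IAU 2009 standard value)
tau_A = 499.004783836

# Unit distance A = c0 * tau_A, in metres
A = c0 * tau_A
print(f"A = {A:,.0f} m")  # ~149,597,870,700 m

# tau_A expressed in minutes, confirming "a little more than 8 minutes"
print(f"tau_A = {tau_A / 60:.2f} min")
```

The product recovers the figure of roughly 149.6 billion metres that the IAU later adopted as the exact conventional value of the astronomical unit.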
This estimate was still derived from observation and measurements subject to error, and based on techniques that did not yet standardize all relativistic effects, and thus were not constant for all observers. In 2012, finding that modelling the relativistic effects alone would make the definition overly complex, the IAU simply used the 2009 estimate to redefine the astronomical unit as a conventional unit of length directly tied to the metre (exactly 149,597,870,700 m) and assigned it the official abbreviation au. The new definition also recognizes as a consequence that the astronomical unit now plays a role of reduced importance, limited in its use to that of a convenience in some applications.
Usage.
With the definitions used before 2012, the astronomical unit was dependent on the heliocentric gravitational constant, that is, the product of the gravitational constant 'G' and the solar mass 'M'☉. Neither 'G' nor 'M'☉ can be measured separately to high accuracy in SI units, but the value of their product is known very precisely from observing the relative positions of planets (Kepler's Third Law expressed in terms of Newtonian gravitation). Only the product is required to calculate planetary positions for an ephemeris, which explains why ephemerides are calculated in astronomical units and not in SI units.
The calculation of ephemerides also requires a consideration of the effects of general relativity. In particular, time intervals measured on the surface of the Earth (terrestrial time, TT) are not constant when compared to the motions of the planets: the terrestrial second (TT) appears to be longer in Northern Hemisphere winter and shorter in Northern Hemisphere summer when compared to the 'planetary second' (conventionally measured in barycentric dynamical time, TDB). This is because the distance between the Earth and the Sun is not fixed (it varies between about 147.1 million and 152.1 million kilometres) and, when the Earth is closer to the Sun (perihelion), the Sun's gravitational field is stronger and the Earth is moving faster along its orbital path. As the metre is defined in terms of the second, and the speed of light is constant for all observers, the terrestrial metre appears to change in length compared to the 'planetary metre' on a periodic basis.
The metre is defined to be a unit of proper length, but the SI definition does not specify the metric tensor to be used in determining it. Indeed, the International Committee for Weights and Measures (CIPM) notes that 'its definition applies only within a spatial extent sufficiently small that the effects of the non-uniformity of the gravitational field can be ignored.' As such, the metre is undefined for the purposes of measuring distances within the Solar System. The 1976 definition of the astronomical unit was incomplete, in particular because it does not specify the frame of reference in which time is to be measured, but proved practical for the calculation of ephemerides: a fuller definition that is consistent with general relativity was proposed, and 'vigorous debate' ensued until in August 2012 the International Astronomical Union adopted the current definition of 1 astronomical unit = 149,597,870,700 metres.
The au is too small for interstellar distances, where the parsec is commonly used. See the article cosmic distance ladder. The light year is often used in popular works, but is not an approved non-SI unit.
History.
According to Archimedes in the 'Sandreckoner' (2.1), Aristarchus of Samos estimated the distance to the Sun to be 10,000 times the Earth's radius (the true value is about 23,500). However, the book 'On the Sizes and Distances of the Sun and Moon', which has long been ascribed to Aristarchus, says that he calculated the distance to the Sun to be between 18 and 20 times the distance to the Moon, whereas the true ratio is about 389.174. The latter estimate was based on the angle between the half moon and the Sun, which he estimated as 87° (the true value being close to 89.853°). Depending on the distance Van Helden assumes Aristarchus used for the distance to the Moon, his calculated distance to the Sun would fall between 380 and 1,520 Earth radii.
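Aristarchus' half-moon method reduces to elementary trigonometry: when the Moon is exactly half illuminated, the Earth–Moon–Sun angle is a right angle, so the ratio of the solar to the lunar distance is 1/cos of the Moon–Sun angle observed from Earth. A sketch showing how sensitive the method is near 90°:

```python
import math

def sun_moon_distance_ratio(angle_deg):
    """Ratio of Earth-Sun to Earth-Moon distance, given the angle (degrees)
    between the half-lit Moon and the Sun as seen from Earth."""
    return 1.0 / math.cos(math.radians(angle_deg))

# Aristarchus' observed angle of 87 degrees gives a ratio of about 19
print(round(sun_moon_distance_ratio(87), 1))

# The true angle, close to 89.853 degrees, gives a ratio near 390 --
# a tiny error in the angle changes the answer by a factor of twenty
print(round(sun_moon_distance_ratio(89.853), 1))
```

The cosine's steep behaviour near 90° explains why an angle mis-measured by less than 3° produced a distance roughly twenty times too small.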
According to Eusebius of Caesarea in the 'Praeparatio Evangelica' (Book XV, Chapter 53), Eratosthenes found the distance to the Sun to be 'σταδιων μυριαδας τετρακοσιας και οκτωκισμυριας' (literally 'of 'stadia' myriads 400 and 80,000', but with the additional note that in the Greek text the grammatical agreement is between 'myriads' (not 'stadia') on the one hand and both '400' and '80,000' on the other, as in Greek, unlike English, all three words (or all four, if one were to include 'stadia') are inflected). This has been translated either as 4,080,000 'stadia' (1903 translation by Edwin Hamilton Gifford), or as 804,000,000 'stadia' (edition of Édouard des Places, dated 1974–1991). Using the Greek stadium of 185 to 190 metres, the former translation comes to about 755,000 km, far too low, whereas the second translation comes to 148.7 to 152.8 million kilometres (accurate within 2%). Hipparchus also gave an estimate of the distance of the Sun from the Earth, quoted by Pappus as equal to 490 Earth radii. According to the conjectural reconstructions of Noel Swerdlow and G. J. Toomer, this was derived from his assumption of a 'least perceptible' solar parallax of 7 arc minutes.
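The unit conversion behind the des Places reading of 804,000,000 stadia is easy to reproduce; a minimal sketch using the 185–190 m range for the Greek stadium:

```python
# Eratosthenes' solar distance per the des Places translation, in stadia
stadia = 804_000_000

# A Greek stadium is usually taken as 185 to 190 metres
low_km = stadia * 185 / 1000    # shortest stadium, result in km
high_km = stadia * 190 / 1000   # longest stadium, result in km

print(f"{low_km / 1e6:.1f} to {high_km / 1e6:.1f} million km")
```

Both ends of the range bracket the modern Earth–Sun distance of about 149.6 million km, which is why that translation is described as accurate to within 2%.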
A Chinese mathematical treatise, the 'Zhoubi suanjing' (c. 1st century BCE), shows how the distance to the Sun can be computed geometrically, using the different lengths of the noontime shadows observed at three places 1,000 li apart and the assumption that the Earth is flat.
In the 2nd century CE, Ptolemy estimated the mean distance of the Sun as 1,210 times the Earth radius. To determine this value, Ptolemy started by measuring the Moon's parallax, finding what amounted to a horizontal lunar parallax of 1° 26′, which was much too large. He then derived a maximum lunar distance of 64⅙ Earth radii. Because of cancelling errors in his parallax figure, his theory of the Moon's orbit, and other factors, this figure was approximately correct. He then measured the apparent sizes of the Sun and the Moon and concluded that the apparent diameter of the Sun was equal to the apparent diameter of the Moon at the Moon's greatest distance, and from records of lunar eclipses, he estimated this apparent diameter, as well as the apparent diameter of the shadow cone of the Earth traversed by the Moon during a lunar eclipse. Given these data, the distance of the Sun from the Earth can be trigonometrically computed to be 1,210 Earth radii. This gives a ratio of solar to lunar distance of approximately 19, matching Aristarchus's figure. Although Ptolemy's procedure is theoretically workable, it is very sensitive to small changes in the data, so much so that changing a measurement by a few percent can make the solar distance infinite.
After Greek astronomy was transmitted to the medieval Islamic world, astronomers made some changes to Ptolemy's cosmological model, but did not greatly change his estimate of the Earth–Sun distance. For example, in his introduction to Ptolemaic astronomy, al-Farghānī gave a mean solar distance of 1,170 Earth radii, while in his 'zij', al-Battānī used a mean solar distance of 1,108 Earth radii. Subsequent astronomers, such as al-Bīrūnī, used similar values. Later in Europe, Copernicus and Tycho Brahe also used comparable figures (1,142 and 1,150 Earth radii), and so Ptolemy's approximate Earth–Sun distance survived through the 16th century.
Johannes Kepler was the first to realize, in his 'Rudolphine Tables' (1627), that Ptolemy's estimate must be significantly too low (according to Kepler, at least by a factor of three). Kepler's laws of planetary motion allowed astronomers to calculate the relative distances of the planets from the Sun, and rekindled interest in measuring the absolute value for the Earth (which could then be applied to the other planets). The invention of the telescope allowed far more accurate measurements of angles than was possible with the naked eye. Flemish astronomer Godefroy Wendelin repeated Aristarchus' measurements in 1635, and found that Ptolemy's value was too low by a factor of at least eleven.
A somewhat more accurate estimate can be obtained by observing the transit of Venus. By measuring the transit from two widely separated locations, one can accurately calculate the parallax of Venus and, from the relative distances of the Earth and Venus from the Sun, the solar parallax 'α' (which cannot be measured directly). Jeremiah Horrocks had attempted to produce an estimate based on his observation of the 1639 transit (published in 1662), giving a solar parallax of 15 arcseconds, similar to Wendelin's figure. The solar parallax is related to the Earth–Sun distance, as measured in Earth radii, by 'A' = 1/sin 'α' ≈ 1/'α' (with 'α' in radians).
The smaller the solar parallax, the greater the distance between the Sun and the Earth: a solar parallax of 15″ is equivalent to an Earth–Sun distance of about 13,750 Earth radii.
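The parallax-to-distance relation can be evaluated directly; a minimal sketch converting a solar parallax in arcseconds to a distance in Earth radii:

```python
import math

def earth_radii_from_parallax(parallax_arcsec):
    """Earth-Sun distance in Earth radii for a given solar parallax,
    using A = 1/sin(alpha)."""
    alpha = math.radians(parallax_arcsec / 3600)  # arcseconds -> radians
    return 1.0 / math.sin(alpha)

# Horrocks' 15 arcseconds: roughly 13,750 Earth radii
print(round(earth_radii_from_parallax(15)))

# The modern solar parallax of about 8.794 arcseconds: roughly 23,455 Earth radii
print(round(earth_radii_from_parallax(8.794143)))
```

The inverse relationship makes the point in the text concrete: halving the measured parallax doubles the inferred distance, so small angular errors translate into large distance errors.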
Christiaan Huygens believed the distance was even greater: by comparing the apparent sizes of Venus and Mars, he estimated a value of about 24,000 Earth radii, equivalent to a solar parallax of 8.6″. Although Huygens' estimate is remarkably close to modern values, it is often discounted by historians of astronomy because of the many unproven (and incorrect) assumptions he had to make for his method to work; the accuracy of his value seems to be based more on luck than good measurement, with his various errors cancelling each other out.
Jean Richer and Giovanni Domenico Cassini measured the parallax of Mars between Paris and Cayenne in French Guiana when Mars was at its closest to Earth in 1672. They arrived at a figure for the solar parallax of 9.5″, equivalent to an Earth–Sun distance of about 22,000 Earth radii. They were also the first astronomers to have access to an accurate and reliable value for the radius of the Earth, which had been measured by their colleague Jean Picard in 1669 as 3,269 thousand 'toises'. Another colleague, Ole Rømer, discovered the finite speed of light in 1676: the speed was so great that it was usually quoted as the time required for light to travel from the Sun to the Earth, or 'light time per unit distance', a convention that is still followed by astronomers today.
A better method for observing Venus transits was devised by James Gregory and published in his 'Optica Promota' (1663). It was strongly advocated by Edmond Halley and was applied to the transits of Venus observed in 1761 and 1769, and then again in 1874 and 1882. Transits of Venus occur in pairs, but less than one pair every century, and observing the transits in 1761 and 1769 was an unprecedented international scientific operation. Despite the Seven Years' War, dozens of astronomers were dispatched to observing points around the world at great expense and personal danger: several of them died in the endeavour. The various results were collated by Jérôme Lalande to give a figure for the solar parallax of 8.6″.
Another method involved determining the constant of aberration, and Simon Newcomb gave great weight to this method when deriving his widely accepted value of 8.80″ for the solar parallax (close to the modern value of 8.794143″), although Newcomb also used data from the transits of Venus. Newcomb also collaborated with A. A. Michelson to measure the speed of light with Earth-based equipment; combined with the constant of aberration (which is related to the light time per unit distance) this gave the first direct measurement of the Earth–Sun distance in kilometres. Newcomb's values for the solar parallax (and for the constant of aberration and the Gaussian gravitational constant) were incorporated into the first international system of astronomical constants in 1896, which remained in place for the calculation of ephemerides until 1964. The name 'astronomical unit' appears first to have been used in 1903.
The discovery of the near-Earth asteroid 433 Eros and its passage near the Earth in 1900–1901 allowed a considerable improvement in parallax measurement. Another international project to measure the parallax of 433 Eros was undertaken in 1930–1931.
Direct radar measurements of the distances to Venus and Mars became available in the early 1960s. Along with improved measurements of the speed of light, these showed that Newcomb's values for the solar parallax and the constant of aberration were inconsistent with one another.
Developments.
The unit distance 'A' (the value of the astronomical unit in metres) can be expressed in terms of other astronomical constants:
'A'³ = 'GM'☉'D'²/'k'²
where 'G' is the Newtonian gravitational constant, 'M'☉ is the solar mass, 'k' is the numerical value of the Gaussian gravitational constant and 'D' is the time period of one day.
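This relation (equivalent to Kepler's third law for a one-day angular step of 'k' radians) can be checked numerically. A rough sketch, using approximate values for 'G' and 'M'☉ assumed here for illustration; the precise values used in ephemeris work differ slightly:

```python
# Rough numerical check of A**3 = G * M_sun * D**2 / k**2
G = 6.674e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2 (approximate)
M_sun = 1.989e30       # solar mass, kg (approximate)
D = 86_400             # one day, in seconds
k = 0.01720209895      # Gaussian gravitational constant, radians per day

A = (G * M_sun * D**2 / k**2) ** (1 / 3)
print(f"A ~ {A:.4e} m")  # close to 1.496e11 m
```

Even with these rounded inputs the result lands within a fraction of a percent of the defined value of 149,597,870,700 m, illustrating why only the product 'GM'☉, not 'G' and 'M'☉ separately, needs to be known precisely.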
The Sun is constantly losing mass by radiating away energy, so the orbits of the planets are steadily expanding outward from the Sun. This has led to calls to abandon the astronomical unit as a unit of measurement.
As the speed of light has an exact defined value in SI units and the Gaussian gravitational constant 'k' is fixed in the astronomical system of units, measuring the light time per unit distance is exactly equivalent to measuring the product 'GM'☉ in SI units. Hence, it is possible to construct ephemerides entirely in SI units, which is increasingly becoming the norm.
A 2004 analysis of radiometric measurements in the inner Solar System suggested that the secular increase in the unit distance was much larger than can be accounted for by solar radiation: +15±4 metres per century.
The measurements of the secular variations of the astronomical unit are not confirmed by other authors and are quite controversial.
Furthermore, since 2010, the astronomical unit has not been estimated by the planetary ephemerides.
Examples.
The distances given are approximate mean distances. Note that the distances between celestial bodies change over time due to their orbits and other factors.
Other views.
In 2006 the BIPM defined the astronomical unit as 1.495 978 706 91(6) × 10¹¹ m, and recommended 'ua' as the symbol for the unit.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1212'>
Artist
An artist is a person engaged in one or more of any of a broad spectrum of activities related to creating art, practicing the arts, and/or demonstrating an art. The common usage in both everyday speech and academic discourse refers to a practitioner in the visual arts only. The term is often used in the entertainment business, especially in a business context, for musicians and other performers (less often for actors). 'Artiste' (the French for artist) is a variant used in English only in this context. Use of the term to describe writers, for example, is certainly valid, but less common, and mostly restricted to contexts like criticism.
Dictionary definitions.
Wiktionary defines the noun 'artist' (Singular: artist; Plural: artists) as follows:
The Oxford English Dictionary defines the older broad meanings of the term 'artist':
A definition of Artist from Princeton.edu: creative person (a person whose creative work shows sensitivity and imagination).
History of the term.
Although the Greek word 'technē' is often mistranslated as 'art,' it actually implies mastery of any sort of craft. The adjectival Latin form of the word, 'technicus', became the source of the English words technique, technology, and technical.
In Greek culture each of the nine Muses oversaw a different field of human creation:
No muse was identified with the visual arts of painting and sculpture. In ancient Greece sculptors and painters were held in low regard, somewhere between freemen and slaves, their work regarded as mere manual labour.
The word 'art' derives from the Latin 'ars' (stem 'art-'), which literally means 'skill', 'method', or 'technique', and also conveys a connotation of beauty.
During the Middle Ages the word 'artist' already existed in some countries such as Italy, but its meaning was closer to 'craftsman', while the word 'artisan' was still unknown. An artist was someone able to do a given work better than others, so skilled excellence was emphasized, rather than the field of activity. In this period some 'artisanal' products (such as textiles) were much more precious and expensive than paintings or sculptures.
The first division into major and minor arts dates back at least to the works of Leon Battista Alberti (1404–1472): 'De re aedificatoria', 'De statua', and 'De pictura', which focused on the importance of the intellectual skills of the artist rather than the manual skills (even though other forms of art also involved design work behind them).
With the Academies in Europe (second half of the 16th century) the gap between fine and applied arts was firmly established.
Many contemporary definitions of 'artist' and 'art' are highly contingent on culture, resisting aesthetic prescription, in much the same way that the features constituting beauty and the beautiful cannot be standardized easily without corruption into kitsch.
The present day concept of an 'artist'.
'Artist' is a descriptive term applied to a person who engages in an activity deemed to be an art. An artist also may be defined unofficially as 'a person who expresses him- or herself through a medium'. The word is also used in a qualitative sense of, a person creative in, innovative in, or adept at, an artistic practice.
Most often, the term describes those who create within a context of the fine arts or 'high culture', activities such as drawing, painting, sculpture, acting, dancing, writing, filmmaking, new media, photography, and music—people who use imagination, talent, or skill to create works that may be judged to have an aesthetic value. Art historians and critics define artists as those who produce art within a recognized or recognizable discipline. Contrasting terms for highly skilled workers in media in the applied arts or decorative arts include artisan, craftsman, and specialized terms such as potter, goldsmith or glassblower. Fine arts artists such as painters succeeded in the Renaissance in raising their status, formerly similar to that of these workers, to a decisively higher level, but in the 20th century the distinction became rather less relevant.
The term may also be used loosely or metaphorically to denote highly skilled people in any non-'art' activities as well: law, medicine, mechanics, or mathematics, for example.
Often, discussions on the subject focus on the differences between 'artist' and 'technician', 'entertainer' and 'artisan', 'fine art' and 'applied art', or on what constitutes art and what does not. The French word 'artiste' (which in French simply means 'artist') has been imported into the English language, where it means a performer (frequently in music hall or vaudeville). Use of the word 'artiste' can also be pejorative. The English word 'artiste' thus has a narrower range of meaning than the word 'artiste' in French.
In 'Living with Art', Mark Getlein proposes six activities, services or functions of contemporary artists:
After looking at years of data on arts school graduates, as well as policies and program outcomes regarding artists, arts, and culture, Elizabeth Lingo and Steven Tepper propose that the divide between 'art for art's sake' artists and commercially successful artists is not as wide as may be perceived, and that 'this bifurcation between the commercial and the noncommercial, the excellent and the base, the elite and the popular, is increasingly breaking down' (Eikhof & Haunschild, 2007). Lingo and Tepper point out:
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1213'>
Actaeon
Actaeon (; ), in Greek mythology, son of the priestly herdsman Aristaeus and Autonoe in Boeotia, was a famous Theban hero. Like Achilles in a later generation, he was trained by the centaur Chiron.
He fell to the fatal wrath of Artemis, but the surviving details of his transgression vary: 'the only certainty is in what Aktaion suffered, his pathos, and what Artemis did: the hunter became the hunted; he was transformed into a stag, and his raging hounds, struck with a 'wolf's frenzy' (Lyssa), tore him apart as they would a stag.' This is the iconic motif by which Actaeon is recognized, both in ancient art and in Renaissance and post-Renaissance depictions.
The plot.
Among others, John Heath has observed, 'The unalterable kernel of the tale was a hunter's transformation into a deer and his death in the jaws of his hunting dogs. But authors were free to suggest different motives for his death.' In the version offered by the Hellenistic poet Callimachus, which has become the standard setting, Artemis was bathing in the woods when the hunter Actaeon stumbled across her, thus seeing her naked. He stopped and stared, amazed at her ravishing beauty. Once seen, Artemis took revenge on Actaeon: she forbade him speech — if he tried to speak, he would be changed into a stag — for the unlucky profanation of her virginity's mystery. Upon hearing the call of his hunting party, he cried out to them and immediately was changed into a stag. At this he fled deep into the woods, where he came upon a pond and, seeing his reflection, groaned. His own hounds then turned upon him and tore him to pieces, not recognizing him. In an endeavour to save himself, he raised his eyes (and would have raised his arms, had he had them) toward Mount Olympus. The gods did not heed his actions, and he was torn to pieces. An element of the earlier myth made Actaeon the familiar hunting companion of Artemis, no stranger. In an embroidered extension of the myth, the hounds were so upset by their master's death that Chiron made a statue of Actaeon so lifelike that the hounds believed it to be their master.
There are various other versions of his transgression: The Hesiodic 'Catalogue of Women' and pseudo-Apollodoran 'Bibliotheke' state that his offense was that he was a rival of Zeus for Semele, his mother's sister, whereas in Euripides' 'Bacchae' he has boasted that he is a better hunter than Artemis:
Further materials, including fragments that belong with the Hesiodic 'Catalogue of Women' and at least four Attic tragedies, including a 'Toxotides' of Aeschylus, have been lost. Diodorus Siculus (4.81.4), in a variant of Actaeon's 'hubris' that has been largely ignored, has it that Actaeon wanted to marry Artemis. Other authors say the hounds were Artemis' own; some lost elaborations of the myth seem to have given them all names and narrated their wanderings after his loss.
According to the Latin version of the story told by the Roman poet Ovid, Actaeon, having accidentally seen Diana (Artemis) on Mount Cithaeron while she was bathing, was changed by her into a stag, and pursued and killed by his fifty hounds. This version also appears in Callimachus' Fifth Hymn, as a mythical parallel to the blinding of Tiresias after he sees Athena bathing.
The literary testimony of Actaeon's myth is largely lost, but Lamar Ronald Lacy, deconstructing the myth elements in what survives and supplementing it by iconographic evidence in late vase-painting, made a plausible reconstruction of an ancient Actaeon myth that Greek poets may have inherited and subjected to expansion and dismemberment. His reconstruction opposes a too-pat consensus that has an archaic Actaeon aspiring to Semele, a classical Actaeon boasting of his hunting prowess and a Hellenistic Actaeon glimpsing Artemis' bath. Lacy identifies the site of Actaeon's transgression as a spring sacred to Artemis at Plataea where Actaeon was a 'hero archegetes' ('hero-founder'). The righteous hunter, the companion of Artemis, seeing her bathing naked at the spring, was moved to try to make himself her consort, as Diodorus Siculus noted, and was punished, in part for transgressing the hunter's 'ritually enforced deference to Artemis' (Lacy 1990:42).
Names of the dogs who devoured Actaeon.
The following list is as given in Hyginus' 'Fabulae'. The first part of the list is taken from Ovid's 'Metamorphoses' (Book III, 206–235), and the second from an unknown source.
'Note:' In the first part of the list, Hyginus fails to correctly differentiate between masculine and feminine names.
Dogs: Melampus, Ichnobates, Pamphagos, Dorceus, Oribasos, Nebrophonos, Laelaps, Theron, Pterelas, Hylaeus, Ladon, Dromas, Tigris, Leucon, Asbolos, Lacon, Aello, Thoos, Harpalos, Melaneus, Labros, Arcas, Argiodus, Hylactor.
Bitches: Agre, Nape, Poemenis, Harpyia, Canache, Sticte, Alce, Lycisce, Lachne, Melanchaetes, Therodamas, Oresitrophos.
Dogs: Acamas, Syrus, Leon, Stilbon, Agrius, Charops, Aethon, Corus, Boreas, Draco, Eudromus, Dromius, Zephyrus, Lampus, Haemon, Cyllopodes, Harpalicus, Machimus, Ichneus, Melampus, Ocydromus, Borax, Ocythous, Pachylus, Obrimus;
Bitches: Argo, Arethusa, Urania, Theriope, Dinomache, Dioxippe, Echione, Gorgo, Cyllo, Harpyia, Lynceste, Leaena, Lacaena, Ocypete, Ocydrome, Oxyrhoe, Orias, *Sagnos, Theriphone, *Volatos, *Chediaetros.
The 'bed of Actaeon'.
In the second century CE, the traveller Pausanias was shown a spring on the road in Attica leading to Plataea from Eleutherae, just beyond Megara 'and a little farther on a rock. It is called the bed of Actaeon, for it is said that he slept thereon when weary with hunting, and that into this spring he looked while Artemis was bathing in it.'
Parallels in Akkadian and Ugarit poems.
In the standard version of the 'Epic of Gilgamesh' (tablet vi) there is a parallel, in the series of examples Gilgamesh gives Ishtar of her mistreatment of her serial lovers:
You loved the herdsman, shepherd and chief shepherd<br>
Who was always heaping up the glowing ashes for you,<br>
And cooked ewe-lambs for you every day.<br>
But you hit him and turned him into a wolf,<br>
His own herd-boys hunt him down<br>
And his dogs tear at his haunches.
Actaeon, torn apart by dogs incited by Artemis, finds another Near Eastern parallel in the Ugaritic hero Aqht, torn apart by eagles incited by Anath who wanted his hunting bow.
The virginal Artemis of classical times is not directly comparable to Ishtar of the many lovers, but the mytheme of Artemis shooting Orion was linked to her punishment of Actaeon by T. C. W. Stinton; the Greek context of the mortal's reproach to the amorous goddess is translated to the episode of Anchises and Aphrodite. Daphnis too was a herdsman loved by a goddess and punished by her: see Theocritus' First Idyll.
Symbolism regarding Actaeon.
In Greek mythology, Actaeon is thought by many, including Hans Biedermann, to symbolize ritual human sacrifice in an attempt to please a god or goddess: the dogs symbolize the sacrificers and Actaeon the sacrifice. Actaeon may also symbolize human curiosity or irreverence.
The myth is seen by Jungian psychologist Wolfgang Giegerich as a symbol for spiritual transformation and/or enlightenment.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1214'>
Anglicanism
Anglicanism is a tradition within Christianity comprising the Church of England and churches which are historically tied to it or have similar beliefs, worship practices and church structures. The word 'Anglican' originates in 'ecclesia anglicana', a medieval Latin phrase dating to at least 1246 that means the 'English Church'. Adherents of Anglicanism are called 'Anglicans'. The great majority of Anglicans are members of churches which are part of the international Anglican Communion. There are, however, a number of churches outside of the Anglican Communion which also consider themselves to be Anglican, most notably those referred to as Continuing Anglican churches, and those which are part of the Anglican realignment movement.
Anglicans base their faith on the Bible, traditions of the apostolic church, apostolic succession ('historic episcopate'), and the writings of the Church Fathers. Anglicanism forms one of the branches of Western Christianity, having definitively declared its independence from the Pope at the time of the Elizabethan Religious Settlement. Many of the new Anglican formularies of the mid-16th century corresponded closely to those of contemporary Reformed Protestantism. These reforms in the Church of England were understood by one of those most responsible for them, the then Archbishop of Canterbury Thomas Cranmer, as navigating a middle way between two of the emerging Protestant traditions, namely Lutheranism and Calvinism. By the end of the century, the retention in Anglicanism of many traditional liturgical forms and of the episcopate was already seen as unacceptable by those promoting the most developed Protestant principles.
In the first half of the 17th century the Church of England and associated episcopal churches in Ireland (Church of Ireland) and in England's American colonies were presented by some Anglican divines as comprising a distinct Christian tradition, with theologies, structures and forms of worship representing a different kind of middle way, or 'via media', between Reformed Protestantism and Roman Catholicism — a perspective that came to be highly influential in later theories of Anglican identity, and was expressed in the description 'Catholic and Reformed'.
Following the American Revolution, Anglican congregations in the United States and Canada were each reconstituted into autonomous churches with their own bishops and self-governing structures; this pattern, through the expansion of the British Empire and the activity of Christian missions, was adopted as the model for many newly formed churches, especially in Africa, Australasia and the regions of the Pacific. In the 19th century the term 'Anglicanism' was coined to describe the common religious tradition of these churches, as well as that of the Scottish Episcopal Church, which, though originating earlier within the Church of Scotland, had come to be recognised as sharing this common identity. The degree of distinction between Reformed and western Catholic tendencies within the Anglican tradition is routinely a matter of debate both within specific Anglican churches and throughout the Anglican Communion. Unique to Anglicanism is the 'Book of Common Prayer', the collection of services that worshippers in most Anglican churches have used for centuries. While it has since undergone many revisions and Anglican churches in different countries have developed other service books, the 'Book of Common Prayer' is still acknowledged as one of the ties that bind the Anglican Communion together.
There is no single Anglican Church with universal juridical authority, since each national or regional church has full autonomy. As the name suggests, the churches of the Anglican Communion are linked by affection and common loyalty. They are in full communion with the See of Canterbury, and thus the Archbishop of Canterbury, in his person, is a unique focus of Anglican unity. He calls the once-a-decade Lambeth Conference, chairs the meeting of primates and is President of the Anglican Consultative Council. With an estimated membership of around 80 million, the Anglican Communion is the third-largest Christian communion in the world, after the Catholic Church and the Eastern Orthodox Churches.
Terminology.
The word 'Anglicanism' came into being in the 19th century, constructed from the older word 'Anglican'. It originally referred only to the teachings and rites of Christians throughout the world in communion with the see of Canterbury, but has sometimes come to be extended to any church following those traditions, regardless of actual membership in the modern Anglican Communion.
The word 'Anglican' originates in 'ecclesia anglicana', a Medieval Latin phrase dating to at least 1246 meaning the 'English Church'. As an adjective, 'Anglican' is used to describe the people, institutions and churches, as well as the liturgical traditions and theological concepts, developed by the Church of England. As a noun, an Anglican is a member of a church in the Anglican Communion. The word is also used by followers of separated groups which have left the communion or have been founded separately from it, though this is sometimes considered a misuse.
Although the term 'Anglican' is found referring to the Church of England as far back as the 16th century, its use did not become general until the latter half of the 19th century. In British parliamentary legislation referring to the English Established Church, there is no need for a description; it is simply the Church of England, though the word 'Protestant' is used in many Acts specifying the succession to the Crown and qualifications for office. When the Union with Ireland Act created the United Church of England and Ireland, it was specified that it should be one 'Protestant Episcopal Church', thereby distinguishing its form of church government from the Presbyterian polity that prevails in the Church of Scotland.
High Churchmen, who objected to the term 'Protestant', initially promoted the term 'Reformed Episcopal Church', and the word 'Episcopal' remains preferred in the title of the Episcopal Church (the province of the Anglican Communion covering the United States) and the Scottish Episcopal Church, though the full name of the former is 'The Protestant Episcopal Church of the United States of America'. Outside the British Isles, however, the term 'Anglican Church' came to be preferred, as it distinguished these churches from others that maintain an episcopal polity, although some churches, in particular the Scottish Episcopal Church, the Church of Ireland and the Church in Wales, continue to use the term only with reservations.
Definition.
Anglicanism, in its structures, theology and forms of worship, is commonly understood as a distinct Christian tradition representing a middle ground between what are perceived to be the extremes of the claims of 16th-century Roman Catholicism and the Lutheran and Reformed varieties of Protestantism of that era. As such, it is often referred to as being a 'via media' (or 'middle way') between these traditions.
The faith of Anglicans is founded in the Scriptures and the Gospels, the traditions of the Apostolic Church, the historical episcopate, the first seven ecumenical councils and the early Church Fathers (among these councils, especially the first four, and among these Fathers, especially those active during the first five centuries of Christianity, according to the 'quinquasaecularist' principle proposed by the English bishop Lancelot Andrewes and the Lutheran dissident Georg Calixtus). Anglicans understand the Old and New Testaments as 'containing all things necessary for salvation' and as being the rule and ultimate standard of faith. 'Reason' and 'Tradition' are seen as valuable means of interpreting Scripture (a position first formulated in detail by Richard Hooker), but there is no full agreement among Anglicans on exactly how Scripture, Reason and Tradition interact (or ought to interact) with each other. Anglicans understand the Apostles' Creed as the baptismal symbol and the Nicene Creed as the sufficient statement of the Christian faith.
Anglicans believe the catholic and apostolic faith is revealed in Holy Scripture and the Catholic creeds and interpret these in light of the Christian tradition of the historic church, scholarship, reason and experience.
Anglicans celebrate the traditional sacraments, with special emphasis being given to the Eucharist, also called Holy Communion, the Lord's Supper or the Mass. The Eucharist is central to worship for most Anglicans as a communal offering of prayer and praise in which the life, death and resurrection of Jesus Christ are proclaimed through prayer, reading of the Bible, singing, giving God thanks over the bread and wine for the innumerable benefits obtained through the passion of Christ, the breaking of the bread, and reception of the bread and wine as representing the body and blood of Christ as instituted at the Last Supper. While many Anglicans celebrate the Eucharist in similar ways to the predominant western Catholic tradition, a considerable degree of liturgical freedom is permitted, and worship styles range from the simple to elaborate.
Unique to Anglicanism is the Book of Common Prayer (BCP), the collection of services that worshippers in most Anglican churches have used for centuries. It was originally called 'common prayer' because it was intended for use in all Church of England churches, which had previously followed differing local liturgies. The term was kept when the church became international because all Anglicans around the world once shared in its use.
In 1549, the first Book of Common Prayer was compiled by Thomas Cranmer, who was then Archbishop of Canterbury. While it has since undergone many revisions and Anglican churches in different countries have developed other service books, the Prayer Book is still acknowledged as one of the ties that bind the Anglican Communion together.
Anglican identity.
Early history.
The founding of Christianity in Britain is attributed in Anglican legend to Joseph of Arimathea and is commemorated at Glastonbury Abbey. Many of the early Church fathers wrote of the presence of Christianity in Roman Britain, with Tertullian stating 'those parts of Britain into which the Roman arms had never penetrated were become subject to Christ'. Saint Alban, who was executed in 209 AD, is the first Christian martyr in the British Isles. Historian Heinrich Zimmer writes that 'Just as Britain was a part of the Roman Empire, so the British Church formed (during the fourth century) a branch of the Catholic Church of the West; and during the whole of that century, from the Council of Arles (316) onward, took part in all proceedings concerning the Church.'
After Roman troops withdrew from Britain, however, the 'absence of Roman military and governmental influence and overall decline of Roman imperial political power enabled Britain and the surrounding isles to develop distinctively from the rest of the West. A new culture emerged around the Irish Sea among the Celtic peoples with Celtic Christianity at its core. What resulted was a form of Christianity distinct from Rome in many traditions and practices.' Historian Charles Thomas, in addition to Celticist Heinrich Zimmer, writes that the distinction between sub-Roman and post-Roman Insular Christianity, also known as Celtic Christianity, began to become apparent around 475 AD, with the Celtic churches allowing married clergy, observing Lent and Easter according to their own calendar, and having a different tonsure; moreover, the Celtic churches operated independently of the Pope's authority, largely as a result of their isolated development in the British Isles.
In what is known as the Gregorian Mission, the Roman Catholic Pope Gregory I sent Augustine of Canterbury to the British Isles in 596 AD, with the purpose of evangelizing the pagans there (who were largely Anglo-Saxons), as well as of reconciling the Celtic churches in the British Isles to the See of Rome. In Kent, Augustine persuaded the Anglo-Saxon king 'Æthelberht and his people to accept Christianity.' Augustine, on two occasions, 'met in conference with members of the Celtic episcopacy, but no understanding was reached between them.' Eventually, the 'Christian Church of the Anglo-Saxon kingdom of Northumbria convened the Synod of Whitby in 663/664 to decide whether to follow Celtic or Roman usages.' This meeting, with King Oswiu as the final decision maker, 'led to the acceptance of Roman usage elsewhere in England and brought the English Church into close contact with the Continent.' As a result of assuming Roman usages, the Celtic Church surrendered its independence and from this point on, the Church in England 'was no longer purely Celtic, but became Anglo-Roman-Celtic'. Theologian Christopher L. Webber writes that although 'the Roman form of Christianity became the dominant influence in Britain as in all of western Europe, Anglican Christianity has continued to have a distinctive quality because of its Celtic heritage.'
The Church in England remained united with Rome until the English Parliament, through the Act of Supremacy, declared King Henry VIII to be the Supreme Head of the Church of England in order to fulfill the 'English desire to be independent from continental Europe religiously and politically.' Although now separate from Rome, the English Church, at this point in history, continued to maintain the Roman Catholic theology on many things, such as the sacraments. Under King Edward VI, however, the Church in England underwent what is known as the English Reformation, in the course of which it acquired a number of characteristics that would subsequently become recognised as constituting a distinct, Anglican, identity.
Development.
By the Elizabethan Settlement, the protestant identity of the English and Irish churches was affirmed through parliamentary legislation which assumed allegiance and loyalty to the British Crown in all their members. However, from the first, the Elizabethan Church began to develop distinct religious traditions; assimilating some of the theology of Reformed churches with the services in the Book of Common Prayer (which drew extensively on the Sarum Rite native to England), under the leadership and organisation of a continuing episcopate; and over the years these traditions themselves came to command adherence and loyalty.
Although two important constitutive elements of what would later emerge as Anglicanism were present in 1559 – the historic episcopate and 'The Book of Common Prayer' – neither the laypeople nor the clergy perceived themselves as Anglicans at the beginning of Elizabeth I's reign. Historical studies on the period 1560–1660 written before the late 1960s tended to project the predominant conformist spirituality and doctrine of the 1660s onto the ecclesiastical situation one hundred years before, and there was also a tendency to take at face value the polemically binary partitions of reality claimed by the contestants studied (such as the dichotomies Protestant-'Popish' or 'Laudian'-'Puritan'). Since the late 1960s these fallacies have been criticized. Studies on the subject written during the last forty-five years have, however, not reached any consensus on how to interpret this period in English church history. The extent to which one or several positions concerning doctrine and spirituality existed alongside the more well-known and articulate Puritan movement and the Durham House Party, and the exact extent of continental Calvinism among the English elite and among ordinary churchgoers from the 1560s to the 1620s, are subjects of ongoing debate.
In 1662, under King Charles II, a revised Book of Common Prayer was produced, which was acceptable to high churchmen as well as some Puritans, and is still considered authoritative to this day.
In so far as Anglicans derived their identity from both parliamentary legislation and ecclesiastical tradition, a crisis of identity could result wherever secular and religious loyalties came into conflict – and such a crisis indeed occurred in 1776 with the American Declaration of Independence, most of whose signatories were, at least nominally, Anglican. For these American Patriots, even the forms of Anglican services were in doubt, since the Prayer Book rites of Matins, Evensong and Holy Communion all included specific prayers for the British Royal Family. Consequently, the conclusion of the War of Independence eventually resulted in the creation of two new Anglican churches: the Episcopal Church in the United States of America, in those states that had achieved independence; and, in the 1830s, the Church of England in Canada, which became independent from the Church of England in those North American colonies that had remained under British control and to which many Loyalist churchmen had migrated.
Reluctantly, legislation was passed in the British Parliament (the Consecration of Bishops Abroad Act 1786) to allow bishops to be consecrated for an American church outside of allegiance to the British Crown (no bishoprics having ever been established in the former American colonies). Both in the United States and in Canada, the new Anglican churches developed novel models of self-government, collective decision-making and self-supported financing that would be consistent with the separation of religious and secular identities.
In the following century, two further factors accelerated the development of a distinct Anglican identity. From 1828 and 1829, Dissenters and Catholics could be elected to the House of Commons, which consequently ceased to be a body drawn purely from the established churches of Scotland, England and Ireland, but which nevertheless, over the following ten years, engaged in extensive reforming legislation affecting the interests of the English and Irish churches, which by the Acts of Union of 1800 had been reconstituted as the United Church of England and Ireland. The propriety of this legislation was bitterly contested by the Oxford Movement (Tractarians), who in response developed a vision of Anglicanism as a religious tradition deriving ultimately from the Ecumenical Councils of the patristic church. Those within the Church of England opposed to the Tractarians, and to their revived ritual practices, introduced a stream of Parliamentary Bills aimed at controlling innovations in worship. This only made the dilemma more acute, with consequent continual litigation in the secular and ecclesiastical courts.
Over the same period, Anglican churches engaged vigorously in Christian missions, resulting in the creation, by the end of the century, of over ninety colonial bishoprics, which gradually coalesced into new self-governing churches on the Canadian and American models. However, the case of John William Colenso, Bishop of Natal, reinstated in 1865 by the English Judicial Committee of the Privy Council over the heads of the Church in South Africa, demonstrated acutely that the extension of episcopacy had to be accompanied by a recognised Anglican ecclesiology of ecclesiastical authority, distinct from secular power.
Consequently, at the instigation of the bishops of Canada and South Africa, the first Lambeth Conference was called in 1867, to be followed by further conferences in 1878 and 1888, and thereafter at ten-year intervals. The various papers and declarations of successive Lambeth Conferences have served to frame the continued Anglican debate on identity, especially as relating to the possibility of ecumenical discussion with other churches. This ecumenical aspiration became much more of a possibility as other denominational groups rapidly followed the example of the Anglican Communion in founding their own transnational alliances: the Alliance of Reformed Churches, the Ecumenical Methodist Council, the International Congregational Council and the Baptist World Alliance.
Theories.
In their rejection of absolute parliamentary authority, the Tractarians – and in particular John Henry Newman – looked back to the writings of 17th-century Anglican divines, finding in these texts the idea of the English church as a 'via media' between the Protestant and Catholic traditions. This view was associated – especially in the writings of Edward Bouverie Pusey – with the theory of Anglicanism as one of three 'branches' (alongside the Catholic Church and the Orthodox churches) historically arising out of the common tradition of the earliest Ecumenical Councils. Newman himself subsequently rejected the theory of the 'via media' as essentially historicist and static, and hence unable to accommodate any dynamic development within the church. Nevertheless, the aspiration to ground Anglican identity in the writings of the 17th-century divines and in faithfulness to the traditions of the Church Fathers reflects a continuing theme of Anglican ecclesiology, most recently in the writings of Henry Robert McAdoo.
The Tractarian formulation of the theory of the 'via media' was essentially a party platform, and not acceptable to Anglicans outside the confines of the Oxford Movement. However, the theory of the 'via media' was reworked in the ecclesiological writings of Frederick Denison Maurice, in a more dynamic form that became widely influential. Both Maurice and Newman saw the Church of England of their day as sorely deficient in faith; but whereas Newman had looked back to a distant past when the light of faith might have appeared to burn brighter, Maurice looked forward to the possibility of a brighter revelation of faith in the future. Maurice saw the Protestant and Catholic strands within the Church of England as contrary but complementary, both maintaining elements of the true church, but incomplete without the other; such that a true catholic and evangelical church might come into being by a union of opposites.
Central to Maurice's perspective was his belief that the collective elements of family, nation and church represented a divine order of structures through which God unfolds his continuing work of creation. Hence, for Maurice, the Protestant tradition had maintained the elements of national distinction which were amongst the marks of the true universal church, but which had been lost within contemporary Roman Catholicism in the internationalism of centralized Papal Authority. Within the coming universal church that Maurice foresaw, national churches would each maintain the six signs of Catholicity: baptism, Eucharist, the creeds, Scripture, an episcopal ministry, and a fixed liturgy (which could take a variety of forms in accordance with divinely ordained distinctions in national characteristics). Not surprisingly, this vision of an emerging universal church as a congregation of autonomous national churches proved highly congenial in Anglican circles, and Maurice's six signs were adapted to form the Chicago-Lambeth Quadrilateral of 1888.
In the latter decades of the 20th century, Maurice's theory, and the various strands of Anglican thought that derived from it, have been criticised by Stephen Sykes; who argues that the terms 'Protestant' and 'Catholic' as used in these approaches are synthetic constructs denoting ecclesiastic identities unacceptable to those to whom the labels are applied. Hence, the Catholic Church does not regard itself as a party or strand within the universal church – but rather identifies itself as the universal church. Moreover, Sykes criticises the proposition, implicit in theories of 'via media', that there is no distinctive body of Anglican doctrine, other than those of the universal church; accusing this of being an excuse not to undertake systematic doctrine at all.
Contrariwise, Sykes notes a high degree of commonality in Anglican liturgical forms and in the doctrinal understandings expressed within those liturgies. He proposes that Anglican identity might rather be found within a shared consistent pattern of prescriptive liturgies, established and maintained through canon law, embodying both a historic deposit of formal statements of doctrine and the framing of the regular reading and proclamation of scripture. Sykes nevertheless agrees with those heirs of Maurice who emphasise the incompleteness of Anglicanism as a positive feature, and quotes with qualified approval the words of Michael Ramsey.
Doctrine.
'Catholic and Reformed'.
In the time of Henry VIII the nature of Anglicanism was based on questions of jurisdiction – specifically, the belief of the Crown that national churches should be autonomous – rather than theological disagreement. The effort was to create a national church in legal continuity with its traditions, but inclusive of certain doctrinal and liturgical beliefs of the Reformers. The result has been a movement with a distinctive self-image among Christian movements. The question often arises as to whether the Anglican Communion should be identified as a Protestant or Catholic church, or perhaps as a distinct branch of Christianity altogether.
The distinction between Reformed and Catholic, and the coherence of the two, is routinely a matter of debate both within specific Anglican churches and throughout the Anglican Communion by members themselves. Since the Oxford Movement of the mid-19th century, many churches of the communion have revived and extended liturgical and pastoral practices similar to Roman Catholicism. This extends beyond the ceremony of High Church services to even more theologically significant territory, such as sacramental theology (see Anglican sacraments). While Anglo-Catholic practices, particularly liturgical ones, have resurfaced and become more common within the tradition over the last century, there remain many places where practices and beliefs remain on the more Reformed or Evangelical side (see Sydney Anglicanism).
Guiding principles.
For High Church Anglicans, doctrine is neither established by a magisterium, nor derived from the theology of an eponymous founder (such as Calvinism), nor summed up in a confession of faith beyond the ecumenical creeds (such as the Lutheran Book of Concord). For them, the earliest Anglican theological documents are its prayer books, which they see as the products of profound theological reflection, compromise and synthesis. They emphasise the Book of Common Prayer as a key expression of Anglican doctrine. The principle of looking to the prayer books as a guide to the parameters of belief and practice is called by the Latin name 'lex orandi, lex credendi' ('the law of prayer is the law of belief').
Within the prayer books are the fundamentals of Anglican doctrine: the Apostles' and Nicene creeds, the Athanasian Creed (now rarely used), the scriptures (via the lectionary), the sacraments, daily prayer, the catechism and apostolic succession in the context of the historic threefold ministry. For some Low Church and Evangelical Anglicans, the 16th-century Reformed Thirty-Nine Articles form the basis of doctrine.
Distinctives of Anglican belief.
The Thirty-Nine Articles initially played a significant role in Anglican doctrine and practice. Following the passing of the 1604 canons, all Anglican clergy had to formally subscribe to the articles. Today, however, the articles are no longer binding, but are seen as a historical document which has played a significant role in the shaping of Anglican identity. The degree to which each of the articles has remained influential varies.
On the doctrine of justification, for example, there is a wide range of beliefs within the Anglican Communion, with some Anglo-Catholics arguing for a faith with good works and the sacraments. At the same time, however, some Evangelical Anglicans subscribe to the Reformed emphasis on 'sola fide' ('faith alone') in their doctrine of justification (see Sydney Anglicanism). Still other Anglicans adopt a nuanced view of justification, taking elements from the early Church Fathers, Catholicism, Protestantism, liberal theology and latitudinarian thought.
Arguably, the most influential of the original articles has been Article VI on the 'sufficiency of scripture' which says that 'Scripture containeth all things necessary to salvation: so that whatsoever is not read therein, nor may be proved thereby, is not to be required of any man, that it should be believed as an article of the Faith, or be thought requisite or necessary to salvation.' This article has informed Anglican biblical exegesis and hermeneutics since earliest times.
Anglicans look for authority in their 'standard divines' (see below). Historically, the most influential of these – apart from Cranmer – has been the 16th century cleric and theologian Richard Hooker who after 1660 was increasingly portrayed as the founding father of Anglicanism. Hooker's description of Anglican authority as being derived primarily from scripture, informed by reason (the intellect and the experience of God) and tradition (the practices and beliefs of the historical church), has influenced Anglican self-identity and doctrinal reflection perhaps more powerfully than any other formula. The analogy of the 'three-legged stool' of scripture, reason, and tradition is often incorrectly attributed to Hooker. Rather Hooker's description is a hierarchy of authority, with scripture as foundational and reason and tradition as vitally important, but secondary, authorities.
Finally, the extension of Anglicanism into non-English cultures, the growing diversity of prayer books and the increasing interest in ecumenical dialogue have led to further reflection on the parameters of Anglican identity. Many Anglicans look to the Chicago-Lambeth Quadrilateral of 1888 as the 'sine qua non' of communal identity. In brief, the Quadrilateral's four points are the scriptures, as containing all things necessary to salvation; the creeds (specifically, the Apostles' and Nicene Creeds) as the sufficient statement of Christian faith; the dominical sacraments of Baptism and Holy Communion; and the historic episcopate.
Anglican divines.
Within the Anglican tradition, 'divines' are clergy whose theological writings have been considered standards for faith, doctrine, worship and spirituality and whose influence has permeated the Anglican Communion in varying degrees through the years. While there is no authoritative list of these Anglican divines, there are some whose names would likely be found on most lists – those who are commemorated in lesser feasts of the Anglican churches and those whose works are frequently anthologised.
The corpus produced by Anglican divines is diverse. What they have in common is a commitment to the faith as conveyed by scripture and the 'Book of Common Prayer', thus regarding prayer and theology in a manner akin to that of the Apostolic Fathers. On the whole, Anglican divines view the 'via media' of Anglicanism not as a compromise, but as 'a positive position, witnessing to the universality of God and God's kingdom working through the fallible, earthly 'ecclesia Anglicana'.'
These theologians regard scripture as interpreted through tradition and reason as authoritative in matters concerning salvation. Reason and tradition are, indeed, extant in and presupposed by scripture, thus implying co-operation between God and humanity, God and nature, and between the sacred and secular. Faith is thus regarded as incarnational and authority as dispersed.
Among the early Anglican divines of the 16th and 17th centuries, the names of Thomas Cranmer, John Jewel, Matthew Parker, Richard Hooker, Lancelot Andrewes and Jeremy Taylor predominate. The influential character of Hooker's 'Of the Laws of Ecclesiastical Polity' cannot be overestimated. Published in 1593 and subsequently, Hooker's eight-volume work is primarily a treatise on church-state relations, but it deals comprehensively with issues of biblical interpretation, soteriology, ethics and sanctification. Throughout the work, Hooker makes clear that theology involves prayer and is concerned with ultimate issues and that theology is relevant to the social mission of the church.
The 18th century saw the rise of two important movements in Anglicanism: Cambridge Platonism, with its mystical understanding of reason as the 'candle of the Lord', and the Evangelical Revival, with its emphasis on the personal experience of the Holy Spirit. The Cambridge Platonist movement evolved into a school called Latitudinarianism, which emphasised reason as the barometer of discernment and took a stance of indifference towards doctrinal and ecclesiological differences.
The Evangelical Revival, influenced by such figures as John Wesley and Charles Simeon, re-emphasised the importance of justification through faith and the consequent importance of personal conversion. Some in this movement, such as Wesley and George Whitefield, took the message to the United States, influencing the First Great Awakening and creating an Anglo-American movement called Methodism that would eventually break away, structurally, from the Anglican churches after the American Revolution.
By the 19th century, there was a renewed interest in pre-Reformation English religious thought and practice. Theologians such as John Keble, Edward Bouverie Pusey and John Henry Newman had widespread influence in the realm of polemics, homiletics and theological and devotional works, not least because they largely repudiated the old High Church tradition and replaced it with a dynamic appeal to antiquity which looked beyond the Reformers and Anglican formularies. Their work is largely credited with the development of the Oxford Movement, which sought to reassert Catholic identity and practice in Anglicanism.
In contrast to this movement, clergy such as the Bishop of Liverpool, John Charles Ryle, sought to uphold the distinctly Reformed identity of the Church of England. He was not a servant of the status quo, but argued for a lively religion which emphasised grace, holy and charitable living and the plain use of the 1662 Book of Common Prayer (interpreted in a partisan Evangelical way) without additional rituals. Frederick Denison Maurice, through such works as 'The Kingdom of Christ', played a pivotal role in inaugurating another movement, Christian socialism. In this, Maurice transformed Hooker's emphasis on the incarnational nature of Anglican spirituality to an imperative for social justice.
In the 19th century, Anglican biblical scholarship began to assume a distinct character, represented by the so-called 'Cambridge triumvirate' of Joseph Lightfoot, F. J. A. Hort and Brooke Foss Westcott. Their orientation is best summed up by Lightfoot's observation that 'Life which Christ is and which Christ communicates, the life which fills our whole beings as we realise its capacities, is active fellowship with God.'
The earlier part of the 20th century is marked by Charles Gore, with his emphasis on natural revelation, and William Temple's focus on Christianity and society, while from outside England, Robert Leighton, Archbishop of Glasgow, and several clergy from the United States have been suggested, such as William Porcher DuBose, John Henry Hobart (1775–1830, Bishop of New York 1816–30), William Meade, Phillips Brooks and Charles Henry Brent.
Churchmanship.
'Churchmanship' can be defined as the manifestation of theology in the realms of liturgy, piety and, to some extent, spirituality. Anglican diversity in this respect has tended to reflect the diversity in the tradition's Reformed and Catholic identity. Different individuals, groups, parishes, dioceses and provinces may identify more closely with one or the other, or some mixture of the two.
The range of Anglican belief and practice became particularly divisive during the 19th century, when some clergy were disciplined and even imprisoned on charges of ritualism while, at the same time, others were criticised for engaging in public worship services with ministers of Reformed churches. Resistance to the growing acceptance and restoration of traditional Catholic ceremonial by the mainstream of Anglicanism ultimately led to the formation of small breakaway churches such as the Free Church of England in England (1844) and the Reformed Episcopal Church in North America (1873).
Anglo-Catholic (and some Broad Church) Anglicans celebrate public liturgy in ways that understand worship to be something very special and of utmost importance. Vestments are worn by the clergy, sung settings are often used and incense may be used. Nowadays, in most Anglican churches, the Eucharist is celebrated in a manner similar to the usage of Catholics and some Lutherans, though in many churches more traditional, 'pre-Vatican II' models of worship are common (e.g. an 'eastward orientation' at the altar). Whilst many Anglo-Catholics derive much of their liturgical practice from that of the pre-Reformation English church, others more closely follow traditional Roman Catholic practices.
The Eucharist may sometimes be celebrated in the form known as High Mass, with a priest, deacon and subdeacon dressed in traditional vestments, with incense and sanctus bells and with prayers adapted from the Roman Missal or other sources by the celebrant. Such churches may also have forms of Eucharistic adoration such as Benediction of the Blessed Sacrament. In terms of personal piety some Anglicans may recite the rosary and angelus, be involved in a devotional society dedicated to 'Our Lady' (the Blessed Virgin Mary) and seek the intercession of the saints.
In recent years the prayer books of several provinces have, out of deference to a greater agreement with Eastern Conciliarism (and a perceived greater respect accorded Anglicanism by Eastern Orthodoxy than by Roman Catholicism), instituted a number of historically Eastern and Oriental Orthodox elements in their liturgies, including introduction of the Trisagion and deletion of the filioque clause from the Nicene Creed.
For their part, those Evangelical (and some Broad Church) Anglicans who emphasise the more Protestant aspects of the Church stress the Reformation theme of salvation by grace through faith. They emphasise the two dominical sacraments of Baptism and Eucharist, viewing the other five as 'lesser rites'. Some Evangelical Anglicans may even tend to take the inerrancy of Scripture literally, adopting the view of Article VI that it contains all things necessary to salvation in an explicit sense. Worship in churches influenced by these principles tends to be significantly less elaborate, with greater emphasis on the Liturgy of the Word (the reading of the scriptures, the sermon and the intercessory prayers).
The Order for Holy Communion may be celebrated bi-weekly or monthly (in preference to the daily offices), by priests attired in choir habit or ordinary clothes, rather than Eucharistic vestments. Ceremony may be in keeping with their view of the provisions of the 17th-century Puritans – being a Reformed interpretation of the Ornaments Rubric – no candles, no incense, no bells and a minimum of manual actions by the presiding celebrant (such as touching the elements at the Words of Institution).
In recent decades there has been a growth of charismatic worship among Anglicans. Both Anglo-Catholics and Evangelicals have been affected by this movement such that it is not uncommon to find typically charismatic postures, music, and other themes evident during the services of otherwise Anglo-Catholic or Evangelical parishes.
The spectrum of Anglican beliefs and practice is too broad to fit neatly into these labels. Many Anglicans locate themselves somewhere in the spectrum of the Broad Church tradition and consider themselves an amalgam of Evangelical and Catholic. Such Anglicans stress that Anglicanism is the 'via media' (middle way) between the two major strains of Western Christianity and that Anglicanism is like a 'bridge' between the two strains.
Sacramental doctrine and practice.
In accord with its prevailing self-identity as a 'via media' or 'middle path' of Western Christianity, Anglican sacramental theology expresses elements in keeping with its status as being both a church in the Catholic tradition as well as a Reformed church. With respect to sacramental theology the Catholic heritage is perhaps most strongly asserted in the importance Anglicanism places on the sacraments as a means of grace, sanctification and salvation as expressed in the church's liturgy and doctrine.
Of the seven sacraments, all Anglicans recognise Baptism and the Eucharist as being directly instituted by Christ. The other five — Confession and absolution, Matrimony, Confirmation, Holy Orders (also called Ordination) and Anointing of the Sick (also called Unction) — are regarded variously as full sacraments by Anglo-Catholics, many High Church and some Broad Church Anglicans, but merely as 'sacramental rites' by other Broad Church and Low Church Anglicans, especially Evangelicals associated with the Reform movement in the UK and the Diocese of Sydney.
Eucharistic theology.
Anglican eucharistic theology is divergent in practice, reflecting the essential comprehensiveness of the tradition. Some Low Church Anglicans take a strictly memorialist (Zwinglian) view of the sacrament. In other words, they see Holy Communion as a memorial to Christ's suffering, and participation in the Eucharist as both a re-enactment of the Last Supper and a foreshadowing of the heavenly banquet – the fulfilment of the eucharistic promise.
Other Low Church Anglicans believe in the Real Presence but deny that the presence of Christ is carnal or is necessarily localised in the bread and wine. Despite explicit criticism in the Thirty-Nine Articles, many High Church or Anglo-Catholic Anglicans hold, more or less, the Catholic view of the Real Presence as expressed in the doctrine of transubstantiation, seeing the Eucharist as a liturgical representation of Christ's atoning sacrifice with the elements actually transformed into Christ's body and blood.
The majority of Anglicans, however, have in common a belief in the Real Presence, defined in one way or another. To that extent, they are in the company of the continental reformer Martin Luther rather than Ulrich Zwingli.
A famous Anglican aphorism regarding Christ's presence in the sacrament is found in a poem by John Donne: 'He was the Word that spake it; He took the bread and brake it; and what that Word did make it; I do believe and take it.'
An Anglican position on the eucharistic sacrifice ('Sacrifice of the Mass') was expressed in the response 'Saepius Officio' of the Archbishops of Canterbury and York to Pope Leo XIII's Papal Encyclical 'Apostolicae curae'.
Anglican and Catholic representatives declared that they had 'substantial agreement on the doctrine of the Eucharist' in the 'Windsor Statement on Eucharistic Doctrine' of the Anglican–Roman Catholic International Commission (1971) and in its subsequent Elucidation (1979). The official response of the Vatican to these documents made it plain that it did not consider the degree of agreement reached to be satisfactory.
Practices.
In Anglicanism there is a distinction between liturgy, which is the formal public and communal worship of the Church, and personal prayer and devotion which may be public or private. Liturgy is regulated by the prayer books and consists of the Holy Eucharist (some call it Holy Communion or Mass), the other six Sacraments, and the Divine Office or Liturgy of the Hours.
Book of Common Prayer.
The 'Book of Common Prayer' (BCP) is the foundational prayer book of Anglicanism. The original book of 1549 (revised 1552) was one of the instruments of the English Reformation, replacing the various 'uses' or rites in Latin that had been used in different parts of the country with a single compact volume in the language of the people, so that 'now from henceforth all the Realm shall have but one use'. Suppressed under Queen Mary I, it was revised in 1559, and then again in 1662, after the Restoration of Charles II. This version was made mandatory in England and Wales by the Act of Uniformity and was in standard use until the mid-20th century.
With British colonial expansion from the 17th century onwards, the Anglican church was planted around the globe. These churches at first used and then revised the Book of Common Prayer, until they, like their parent church, produced prayer books which took into account the developments in liturgical study and practice in the 19th and 20th centuries, which come under the general heading of the Liturgical Movement.
Worship.
Anglican worship services are open to all visitors. Anglican worship originates principally in the reforms of Thomas Cranmer, who aimed to create a set order of service like that of the pre-Reformation church but less complex in its seasonal variety and said in English rather than Latin. This use of a set order of service is not unlike the Catholic tradition. Traditionally the pattern was that laid out in the Book of Common Prayer. Although many Anglican churches now use a wide range of modern service books written in the local language, the structures of the Book of Common Prayer are largely retained. Churches that call themselves Anglican do so because they use some form or variant of the Book of Common Prayer in the shaping of their worship.
Anglican worship, however, is as diverse as Anglican theology. A contemporary 'low church' or Evangelical service may differ little from the worship of many mainstream non-Anglican Protestant churches. The service is constructed around a sermon focused on biblical exposition; it opens with one or more Bible readings and closes with a series of prayers (both set and extemporised) and hymns or songs. A 'high church' or Anglo-Catholic service, by contrast, is usually a more formal liturgy celebrated by clergy in distinctive vestments and may be almost indistinguishable from a Roman Catholic service, often resembling the 'pre-Vatican II' Tridentine rite.
Between these extremes are a variety of styles of worship, often involving a robed choir and the use of the organ to accompany the singing and to provide music before and after the service. Anglican churches tend to have pews or chairs and it is usual for the congregation to kneel for some prayers but to stand for hymns and other parts of the service such as the Gloria, Collect, Gospel reading, Creed and either the Preface or all of the Eucharistic Prayer. High Anglicans may genuflect or cross themselves in the same way as Catholics.
Other more traditional Anglicans tend to follow the 1662 Book of Common Prayer, and retain the use of the King James Bible. This is typical in many Anglican cathedrals and particularly in Royal Peculiars such as the Savoy Chapel and the Queen's Chapel. These services reflect the original Anglican doctrine and differ from the Traditional Anglican Communion in that they are in favour of women vicars and the ability of vicars to marry. These Anglican church services include classical music instead of songs, hymns from the New English Hymnal (usually excluding modern hymns such as Lord of the Dance), and are generally non-evangelical and formal in practice. Due to their association with royalty, these churches are generally host to staunch Anglicans who are strongly opposed to Catholicism.
Until the mid-20th century the main Sunday service was typically morning prayer, but the Eucharist has once again become the standard form of Sunday worship in many Anglican churches; this again is similar to Roman Catholic practice. Other common Sunday services include an early morning Eucharist without music, an abbreviated Eucharist following a service of morning prayer, and a service of evening prayer, sometimes in the form of sung Evensong, usually celebrated between 3 and 6 pm. The late-evening service of Compline was revived in parish use in the early 20th century. Many Anglican churches will also have daily morning and evening prayer and some have midweek or even daily celebration of the Eucharist.
An Anglican service (whether or not a Eucharist) will include readings from the Bible that are generally taken from a standardised lectionary, which provides for much of the Bible (and some passages from the Apocrypha) to be read aloud in church over a cycle of one, two or three years (depending on which eucharistic and office lectionaries are used). The sermon (or homily) is typically about ten to twenty minutes in length, though it may be much longer in Evangelical churches. Even in the most informal Evangelical services it is common for set prayers such as the weekly Collect to be read. There are also set forms for intercessory prayer, though this is now more often extemporaneous. In high and Anglo-Catholic churches there are generally prayers for the dead.
Although Anglican public worship is usually ordered according to the canonically approved services, in practice many Anglican churches use forms of service outside these norms. Many Evangelical churches, as well as extreme Anglo-Catholic ones, sit lightly to the set forms of morning and evening prayer, though generally respecting the canonical order of Holy Communion. Liberal churches may use freely structured or experimental forms of worship, including patterns borrowed from ecumenical traditions such as those of Taizé Community or the Iona Community.
Anglo-Catholic parishes might use the modern Roman Catholic liturgy of the Mass or more traditional forms, such as the Tridentine Mass (which is translated into English in the English Missal), the Anglican Missal, or, less commonly, the Sarum Rite. Catholic devotions such as the Rosary, Angelus and Benediction of the Blessed Sacrament are also common among Anglo-Catholics.
Eucharistic discipline.
Only baptised persons are eligible to receive communion, although in many churches communion is restricted to those who have not only been baptised but also confirmed. In many Anglican provinces, however, all baptised Christians are now often invited to receive communion and some dioceses have regularised a system for admitting baptised young people to communion before they are confirmed.
The discipline of fasting before communion is practised by some Anglicans. Most Anglican priests require the presence of at least one other person for the celebration of the Eucharist (referring back to Christ's statement in Matthew 18:20, 'Where two or three are gathered together in my name, there am I in the midst of them'), though some Anglo-Catholic priests (like Roman Catholic priests) may say private Masses. As in the Catholic Church, it is a canonical requirement to use fermented wine for the Communion.
Unlike in mainstream Catholicism, the consecrated bread and wine are always offered together to the congregation in a Eucharistic service ('Communion in Both Kinds'). This practice is gradually being adopted in the Catholic Church too, especially through the Neocatechumenal Way. In some churches the sacrament is reserved in a tabernacle or aumbry with a lighted candle or lamp nearby. In Anglican churches, only a priest or a bishop may be the celebrant at the Eucharist.
Divine office.
All Anglican prayer books contain offices for Morning Prayer (Matins) and Evening Prayer (Evensong). In the original Book of Common Prayer these were derived from combinations of the ancient monastic offices of Matins and Lauds; and Vespers and Compline respectively. The prayer offices have an important place in Anglican history.
Prior to the Catholic revival of the 19th century, which eventually restored the Holy Eucharist as the principal Sunday liturgy, and especially during the 18th century, a morning service combining Matins, the Litany and ante-Communion comprised the usual expression of common worship; while Matins and Evensong were sung daily in cathedrals and some collegiate chapels. This nurtured a tradition of distinctive Anglican chant applied to the canticles and psalms used at the offices (although plainsong is often used as well).
In some official and many unofficial Anglican service books these offices are supplemented by other offices, such as Prime, prayer during the day (Terce, Sext and None) and Compline. Some Anglican monastic communities have a Daily Office based on that of the Book of Common Prayer but with additional antiphons and canticles, etc. for specific days of the week, specific psalms, etc. See, for example, Order of the Holy Cross and Order of St Helena, editors, 'A Monastic Breviary' (Wilton, Conn.: Morehouse-Barlow, 1976). The All Saints Sisters of the Poor, with convents in Catonsville, Maryland and elsewhere, use an elaborated version of the Anglican Daily Office. The Society of St. Francis publishes 'Celebrating Common Prayer', which has become especially popular for use among Anglicans.
In England, the United States, Canada, Australia, New Zealand and some other Anglican provinces the modern prayer books contain four offices:
In addition, most prayer books include a section of prayers and devotions for family use. In the US, these offices are further supplemented by an 'Order of Worship for the Evening', a prelude to or an abbreviated form of Evensong, partly derived from Orthodox prayers. In the United Kingdom, 'Daily Prayer', the third volume of Common Worship, was published in 2005. It retains the services for Morning and Evening Prayer and Compline and includes a section entitled 'Prayer during the Day'. 'A New Zealand Prayer Book' of 1989 provides different outlines for Matins and Evensong on each day of the week, as well as 'Midday Prayer', 'Night Prayer' and 'Family Prayer'.
Some Anglicans who pray the office on a daily basis use the present Divine Office of the Catholic Church. In many cities, especially in England, Anglican and Catholic priests and lay people often meet several times a week to pray the office in common. A small but enthusiastic minority use the Anglican Breviary, or other translations and adaptations of the pre-Vatican II Roman Rite and Sarum Rite, along with supplemental material from cognate western sources, to provide such things as a common of Octaves, a common of Holy Women and other additional material. Others may privately use idiosyncratic forms borrowed from a wide range of Christian traditions.
'Quires and Places where they sing'.
In the late medieval period, many English cathedrals and monasteries had established small choirs of trained lay clerks and boy choristers to perform polyphonic settings of the Mass in their Lady Chapels. Although these 'Lady Masses' were discontinued at the Reformation, the associated musical tradition was maintained in the Elizabethan Settlement through the establishment of choral foundations for daily singing of the Divine Office by expanded choirs of men and boys. This resulted from an explicit addition by Elizabeth herself to the injunctions accompanying the 1559 Book of Common Prayer (that had itself made no mention of choral worship) by which existing choral foundations and choir schools were instructed to be continued, and their endowments secured. Consequently, some thirty-four cathedrals, collegiate churches and royal chapels maintained paid establishments of lay singing men and choristers in the late 16th century.
All save four of these have – with an interruption during the Commonwealth – continued daily choral prayer and praise to this day. In the Offices of Matins and Evensong in the 1662 Book of Common Prayer, these choral establishments are specified as 'Quires and Places where they sing'.
For nearly three centuries, this round of daily professional choral worship represented a tradition entirely distinct from that embodied in the intoning of Parish Clerks, and the singing of 'west gallery choirs' which commonly accompanied weekly worship in English parish churches. In 1841, the rebuilt Leeds Parish Church established a surpliced choir to accompany parish services, drawing explicitly on the musical traditions of the ancient choral foundations. Over the next century, the Leeds example proved immensely popular and influential for choirs in cathedrals, parish churches and schools throughout the Anglican communion. More or less extensively adapted, this choral tradition also became the direct inspiration for robed choirs leading congregational worship in a wide range of Christian denominations.
In 1719 the cathedral choirs of Gloucester, Hereford and Worcester combined to establish the annual Three Choirs Festival, the precursor of the multitude of summer music festivals since. By the 20th century, the choral tradition had become for many the most accessible face of worldwide Anglicanism – especially as promoted through the regular broadcasting of choral evensong by the BBC, and also in the annual televising of the Festival of Nine Lessons and Carols from King's College, Cambridge. Composers closely concerned with this tradition include Edward Elgar, Ralph Vaughan Williams, Gustav Holst, Charles Villiers Stanford and Benjamin Britten. A number of important 20th-century works by non-Anglican composers were originally commissioned for the Anglican choral tradition – for example the 'Chichester Psalms' of Leonard Bernstein, and the 'Nunc dimittis' of Arvo Pärt.
Organisation of the Anglican Communion.
Principles of governance.
Contrary to popular misconception, the British monarch is not the constitutional 'head' of the Church of England but is, in law, its 'Supreme Governor'; nor does he or she have any role in provinces outside England. The role of the crown in the Church of England is practically limited to the appointment of bishops, including the Archbishop of Canterbury, and even this role is limited, as the Church presents the government with a short list of candidates to choose from. This process is accomplished through collaboration with and consent of ecclesial representatives '(see Ecclesiastical Commissioners)'. The monarch has no constitutional role in Anglican churches in other parts of the world, although the prayer books of several countries where she is head of state maintain prayers for her as sovereign.
A characteristic of Anglicanism is that it has no international juridical authority. All thirty-nine provinces of the Anglican Communion are autonomous, each with its own primate and governing structure. These provinces may take the form of national churches (such as in Canada, Uganda or Japan), a collection of nations (such as the West Indies, Central Africa or South Asia), or geographical regions (such as Vanuatu and Solomon Islands). Within these Communion provinces may exist subdivisions, called ecclesiastical provinces, under the jurisdiction of a metropolitan archbishop.
All provinces of the Anglican Communion consist of dioceses, each under the jurisdiction of a bishop. In the Anglican tradition, bishops must be consecrated according to the strictures of apostolic succession, which Anglicans consider one of the marks of Catholicity. Apart from bishops, there are two other orders of ordained ministry: deacon and priest.
No requirement is made for clerical celibacy, though many Anglo-Catholic priests have traditionally been bachelors. Following innovations at various points since the latter half of the 20th century, women may be ordained as deacons in almost all provinces, as priests in some, and as bishops in a few. Anglican religious orders and communities, suppressed in England during the Reformation, have re-emerged, especially since the mid-19th century, and now have an international presence and influence.
Government in the Anglican Communion is synodical, consisting of three houses of laity (usually elected parish representatives), clergy, and bishops. National, provincial, and diocesan synods maintain different scopes of authority, depending on their canons and constitutions. Anglicanism is not congregational in its polity: it is the diocese, not the parish church, which is the smallest unit of authority in the church. '(See Episcopal polity)'.
Archbishop of Canterbury.
The Archbishop of Canterbury has a precedence of honour over the other primates of the Anglican Communion, and for a province to be considered a part of the Communion means specifically to be in full communion with the See of Canterbury. The Archbishop is, therefore, recognised as 'primus inter pares', or first amongst equals even though he does not exercise any direct authority in any province outside England, of which he is chief primate. Rowan Williams, the Archbishop of Canterbury from 2002 to 2012, was the first archbishop appointed from outside the Church of England since the Reformation: he was formerly the Archbishop of Wales.
As 'spiritual head' of the Communion, the Archbishop of Canterbury maintains a certain moral authority, and has the right to determine which churches will be in communion with his See. He hosts and chairs the Lambeth Conferences of Anglican Communion bishops, and decides who will be invited to them. He also hosts and chairs the Anglican Communion Primates' Meeting and is responsible for the invitations to it. He acts as president of the secretariat of the Anglican Communion Office, and its deliberative body, the Anglican Consultative Council.
Conferences.
The Anglican Communion has no international juridical organisation. All international bodies are consultative and collaborative, and their resolutions are not legally binding on the autonomous provinces of the Communion. There are three international bodies of note.
Ordained ministry.
Like the Catholic Church and the Orthodox churches, the Anglican Communion maintains the threefold ministry of deacons, presbyters (usually called 'priests') and bishops.
Episcopate.
Bishops, who possess the fullness of Christian priesthood, are the successors of the Apostles. Primates, archbishops and metropolitans are all bishops and members of the historical episcopate who derive their authority through apostolic succession – an unbroken line of bishops that can be traced back to the 12 apostles of Jesus.
Priesthood.
Bishops are assisted by priests and deacons. Most ordained ministers in the Anglican Communion are priests, who usually work in parishes within a diocese. Priests are in charge of the spiritual life of parishes and are usually called the rector or vicar. A curate (or, more correctly, an 'assistant curate') is a term often used for a priest or deacon who assists the parish priest. Non-parochial priests may earn their living by any vocation, although employment by educational institutions or charitable organisations is most common. Priests also serve as chaplains of hospitals, schools, prisons, and in the armed forces.
An archdeacon is a priest or deacon responsible for administration of an archdeaconry, which is often the name given to the principal subdivisions of a diocese. An archdeacon represents the diocesan bishop in his or her archdeaconry. In the Church of England the position of archdeacon can only be held by someone in priestly orders who has been ordained for at least six years. In some other parts of the Anglican Communion the position can also be held by deacons. In parts of the Anglican Communion where women cannot be ordained as priests or bishops but can be ordained as deacons, the position of archdeacon is effectively the most senior office an ordained woman can be appointed to.
A dean is a priest who is the principal cleric of a cathedral or other collegiate church and the head of the chapter of canons. If the cathedral or collegiate church has its own parish, the dean is usually also rector of the parish. However, in the Church of Ireland the roles are often separated and most cathedrals in the Church of England do not have associated parishes. In the Church in Wales, however, most cathedrals are parish churches and their deans are now also vicars of their parishes.
The Anglican Communion recognises Roman Catholic and Eastern Orthodox ordinations as valid. Outside the Anglican Communion, Anglican ordinations (at least of male priests) are recognised by the Old Catholic Church, Porvoo Communion Lutherans and various Independent Catholic churches.
Diaconate.
In Anglican churches, deacons often work directly in ministry to the marginalised inside and outside the church: the poor, the sick, the hungry, the imprisoned. Unlike Orthodox and most Roman Catholic deacons, who may be married only before ordination, Anglican deacons are permitted to marry freely both before and after ordination, as are priests. Most deacons are preparing for priesthood and usually remain deacons for only about a year before being ordained priests. However, there are some deacons who remain so.
Many provinces of the Anglican Communion ordain both men and women as deacons. Many of those provinces that ordain women to the priesthood previously allowed them to be ordained only to the diaconate. The effect of this was the creation of a large and overwhelmingly female diaconate for a time, as most men proceeded to be ordained priest after a short time as a deacon.
Deacons, in some dioceses, can be granted licences to solemnise matrimony, usually under the instruction of their parish priest and bishop. They sometimes officiate at Benediction of the Blessed Sacrament in churches which have this service. Deacons are not permitted to preside at the Eucharist (but can lead worship with the distribution of already consecrated communion where this is permitted), absolve sins or pronounce a blessing. It is the prohibition against deacons pronouncing blessings that leads some to believe that deacons cannot solemnise matrimony.
Laity.
All baptised members of the church are called Christian faithful, truly equal in dignity and in the work to build the church. Some non-ordained people also have a formal public ministry, often on a full-time and long-term basis – such as lay readers (also known as readers), churchwardens, vergers and sextons. Other lay positions include acolytes (male or female, often children), lay eucharistic ministers (also known as chalice bearers) and lay eucharistic visitors (who deliver consecrated bread and wine to 'shut-ins' or members of the parish who are unable to leave home or hospital to attend the Eucharist). Lay people also serve on the parish altar guild (preparing the altar and caring for its candles, linens, flowers etc.), in the choir and as cantors, as ushers and greeters and on the church council (called the 'vestry' in some countries) which is the governing body of a parish.
Religious orders.
A small yet influential aspect of Anglicanism is its religious orders and communities. Shortly after the beginning of the Catholic Revival in the Church of England, there was a renewal of interest in re-establishing religious and monastic orders and communities, which had been dissolved and their assets seized under Henry VIII. In 1841 Marian Rebecca Hughes became the first woman to take the vows of religion in communion with the Province of Canterbury since the Reformation. In 1848, Priscilla Lydia Sellon became the superior of the Society of the Most Holy Trinity at Devonport, Plymouth, the first organised religious order. Sellon is called 'the restorer, after three centuries, of the religious life in the Church of England.' For the next one hundred years, religious orders for both men and women proliferated throughout the world, becoming a numerically small but disproportionately influential feature of global Anglicanism.
Anglican religious life at one time boasted hundreds of orders and communities, and thousands of religious. An important aspect of Anglican religious life is that most communities of both men and women lived their lives consecrated to God under the vows of poverty, chastity and obedience (or, in Benedictine communities, stability, conversion of life and obedience), practising a mixed life of reciting the full eight services of the Breviary in choir, along with a daily Eucharist, plus service to the poor. The mixed life, combining aspects of the contemplative and active orders, remains to this day a hallmark of Anglican religious life. Another distinctive feature of Anglican religious life is the existence of some mixed-gender communities.
Since the 1960s there has been a sharp decline in the number of professed religious in most parts of the Anglican Communion, especially in North America, Europe, and Australia. Many once large and international communities have been reduced to a single convent or monastery with memberships of elderly men or women. In the last few decades of the 20th century, novices have for most communities been few and far between. Some orders and communities have already become extinct. There are, however, still thousands of Anglican religious working today in approximately 200 communities around the world, and religious life in many parts of the Communion – especially in developing nations – flourishes.
The most significant growth has been in the Melanesian countries of the Solomon Islands, Vanuatu and Papua New Guinea. The Melanesian Brotherhood, founded at Tabalia, Guadalcanal, in 1925 by Ini Kopuria, is now the largest Anglican Community in the world with over 450 brothers in the Solomon Islands, Vanuatu, Papua New Guinea, the Philippines and the United Kingdom. The Sisters of the Church, started by Mother Emily Ayckbowm in England in 1870, has more sisters in the Solomons than all their other communities. The Community of the Sisters of Melanesia, started in 1980 by Sister Nesta Tiboe, is a growing community of women throughout the Solomon Islands.
The Society of Saint Francis, founded as a union of various Franciscan orders in the 1920s, has experienced great growth in the Solomon Islands. Other communities of religious have been started by Anglicans in Papua New Guinea and in Vanuatu. Most Melanesian Anglican religious are in their early to mid-20s – vows may be temporary and it is generally assumed that brothers, at least, will leave and marry in due course – making the average age 40 to 50 years younger than their brothers and sisters in other countries. Growth of religious orders, especially for women, is marked in certain parts of Africa.
Worldwide distribution.
Anglicanism represents the third largest Christian communion in the world, after the Catholic Church and the Eastern Orthodox Churches. The number of Anglicans in the world is well over 85 million as of 2011. The 11 provinces in Africa saw explosive growth in the last two decades. They now include 36.7 million members, more Anglicans than there are in England. England remains the largest single Anglican province, with 26 million members. In most industrialised countries, church attendance has decreased since the 19th century. Anglicanism's presence in the rest of the world is due to large-scale emigration, the establishment of expatriate communities or the work of missionaries.
The Church of England has been a church of missionaries since the 17th century when the Church first left English shores with colonists who founded what would become the United States, Australia, Canada, New Zealand and South Africa and established Anglican churches. For example, an Anglican chaplain, Robert Wolfall, with Martin Frobisher's Arctic expedition celebrated the Eucharist in 1578 in Frobisher Bay.
The first Anglican church in the Americas was built at Jamestown, Virginia, in 1607. By the 18th century, missionaries worked to establish Anglican churches in Asia, Africa and Latin America. The great Church of England missionary societies were founded: for example, the Society for Promoting Christian Knowledge (SPCK) in 1698, the Society for the Propagation of the Gospel in Foreign Parts (SPG) in 1701, and the Church Mission Society (CMS) in 1799.
The 19th century saw the founding and expansion of socially oriented evangelism, with societies such as the Church Pastoral Aid Society (CPAS) in 1836, Mission to Seafarers in 1856, Mothers' Union in 1876 and Church Army in 1882 all carrying out a personal form of evangelism.
The 20th century saw the Church of England developing new forms of evangelism, such as the Alpha course, developed and propagated from Holy Trinity Brompton Church in London from 1990. In the 21st century, there has been renewed effort to reach children and youth. Fresh Expressions, a Church of England missionary initiative to youth begun in 2005, has ministries at a skate park through the efforts of St George's Church, Benfleet, Essex (Diocese of Chelmsford), as well as youth groups with evocative names, such as the C.L.A.W. (Christ Little Angels – Whatever!) youth group at Coventry Cathedral. For the unchurched who do not wish to visit a bricks-and-mortar church, there are Internet ministries such as the Diocese of Oxford's online Anglican i-Church, which appeared on the web in 2005.
Ecumenism.
Anglican interest in ecumenical dialogue can be traced back to the time of the Reformation and dialogues with both Orthodox and Lutheran churches in the 16th century. In the 19th century, with the rise of the Oxford Movement, there arose greater concern for reunion of the churches of 'Catholic confession.' This desire to work towards full communion with other denominations led to the development of the Chicago-Lambeth Quadrilateral, approved by the Third Lambeth Conference of 1888. The four points (the sufficiency of scripture, the historic creeds, the two dominical sacraments, and the historic episcopate) were proposed as a basis for discussion, although they have frequently been taken as a non-negotiable bottom line for any form of reunion.
Theological diversity.
Anglicanism in general has always sought a balance between the emphases of Catholicism and Protestantism, while tolerating a range of expressions of evangelicalism and ceremony. Clergy and laity from all Anglican churchmanship traditions have been active in the formation of the Continuing movement.
While there are high church, broad church, and low church Continuing Anglicans, many Continuing churches are Anglo-Catholic with highly ceremonial liturgical practices. Others belong to a more Evangelical or low church tradition and tend to support the Thirty-nine Articles and simpler worship services. Morning Prayer, for instance, is often used instead of the Holy Eucharist for Sunday worship services, although this is not necessarily true of all low church parishes.
Most Continuing churches in the United States reject the 1979 revision of the Book of Common Prayer by the Episcopal Church and use the 1928 version for their services instead. In addition, Anglo-Catholic bodies may use the Anglican Missal or English Missal in celebrating the Eucharist.
Social activism.
Anglican concern with broader issues of social justice can be traced to its earliest divines. Richard Hooker, for instance, wrote that 'God hath created nothing simply for itself, but each thing in all things, and of every thing each part in other have such interest, that in the whole world nothing is found whereunto any thing created can say, "I need thee not".'
This, and related statements, reflect the deep thread of incarnational theology running through Anglican social thought – a theology which sees God, nature, and humanity in dynamic interaction, and the interpenetration of the secular and the sacred in the make-up of the cosmos. Such theology is informed by a traditional English spiritual ethos, rooted in Celtic Christianity and reinforced by Anglicanism's origins as an established church, bound up by its structure in the life and interests of civil society.
Repeatedly, throughout Anglican history, this principle has reasserted itself in movements of social justice. For instance, in the 18th century the influential Evangelical Anglican William Wilberforce, along with others, campaigned against the slave trade. In the 19th century, the dominant issues concerned the adverse effects of industrialisation. The usual Anglican response was to focus on education and give support to 'The National Society for the Education of the Children of the Poor in the principles of the Church of England'.
Working conditions and Christian socialism.
Lord Shaftesbury, a devout Evangelical, campaigned to improve the conditions in factories, in mines, for chimney sweeps, and for the education of the very poor. For years he was chairman of the Ragged School Board. Frederick Denison Maurice was a leading figure advocating reform, founding so-called 'producer's co-operatives' and the Working Men's College. His work was instrumental in the establishment of the Christian socialist movement, although he himself was not in any real sense a socialist but 'a Tory paternalist with the unusual desire to theorise his acceptance of the traditional obligation to help the poor'. He influenced Anglo-Catholics such as Charles Gore, who wrote that 'the principle of the incarnation is denied unless the Christian spirit can be allowed to concern itself with everything that interests and touches human life.' Anglican focus on labour issues culminated in the work of William Temple in the 1930s and 1940s.
Pacifism.
The question of whether Christianity is a pacifist religion has remained a matter of debate for Anglicans. In 1937, the Anglican Pacifist Fellowship emerged as a distinct reform organisation, seeking to make pacifism a clearly defined part of Anglican theology. The group rapidly gained popularity amongst Anglican intellectuals, including Vera Brittain, Evelyn Underhill and former British political leader George Lansbury. Furthermore, the Reverend Dick Sheppard, who during the 1930s was one of Britain's most famous Anglican priests due to his landmark sermon broadcasts for BBC radio, founded the Peace Pledge Union, a secular pacifist organisation for the non-religious that gained considerable support throughout the 1930s.
Whilst never actively endorsed by the Anglican Church, many Anglicans have unofficially adopted the Augustinian 'just war' doctrine. The Anglican Pacifist Fellowship remains highly active throughout the Anglican world. It rejects the doctrine of 'just war' and seeks to reform the Church by reintroducing the pacifism inherent in the beliefs of many of the earliest Christians and present in their interpretation of Christ's Sermon on the Mount. The principles of the Anglican Pacifist Fellowship are often formulated as a statement of belief that 'Jesus' teaching is incompatible with the waging of war, that a Christian church should never support or justify war and that our Christian witness should include opposing the waging or justifying of war.'
Confusing the matter was the fact that the 37th Article of Religion in the Book of Common Prayer states that 'it is lawful for Christian men, at the commandment of the Magistrate, to wear weapons, and serve in the wars.' The Lambeth Conference in the modern era has therefore sought to provide a clearer position by repudiating modern war, and developed a statement that has been affirmed at each subsequent meeting of the Conference.
This statement was strongly reasserted when 'the 67th General Convention of the Episcopal Church reaffirms the statement made by the Anglican Bishops assembled at Lambeth in 1978 and adopted by the 66th General Convention of the Episcopal Church in 1979, calling 'Christian people everywhere ... to engage themselves in non-violent action for justice and peace and to support others so engaged, recognizing that such action will be controversial and may be personally very costly ... this General Convention, in obedience to this call, urges all members of this Church to support by prayer and by such other means as they deem appropriate, those who engaged in such non-violent action, and particularly those who suffer for conscience' sake as a result; and be it further Resolved, that this General Convention calls upon all members of this Church seriously to consider the implications for their own lives of this call to resist war and work for peace for their own lives.'
After World War II.
The focus on other social issues became increasingly diffuse after the Second World War. On the one hand, the growing independence and strength of Anglican churches in the global south brought new emphasis to issues of global poverty, the inequitable distribution of resources, and the lingering effects of colonialism. In this regard, figures such as Desmond Tutu and Ted Scott were instrumental in mobilizing Anglicans worldwide against the apartheid policies of South Africa. Rapid social change in the industrialised world during the 20th century compelled the church to examine issues of gender, sexuality and marriage.
Split within Anglicanism.
These changes led to Lambeth Conference resolutions countenancing contraception and the remarriage of divorced persons. They led to most provinces approving the ordination of women. In more recent years it has led some jurisdictions to permit the ordination of people in same-sex relationships and to authorise rites for the blessing of same-sex unions (see homosexuality and Anglicanism). More conservative elements within and outside of Anglicanism (primarily African churches and factions within North American Anglicanism) have opposed these proposals.
Some liberal and moderate Anglicans see this opposition as representing a new fundamentalism within Anglicanism. Others see the advocacy for these proposals as representing a breakdown of Christian theology and commitment. The lack of social consensus among and within provinces of diverse cultural traditions has resulted in considerable conflict and even schism concerning some or all of these developments (see Anglican realignment). Some Anglicans opposed to various liberalising changes, in particular the ordination of women, have converted to Roman Catholicism. Others have, at various times, joined the Continuing Anglican movement.
These latter trends reflect a countervailing tendency in Anglicanism towards insularity, reinforced perhaps by the 'big tent' nature of the movement, which seeks to be comprehensive of various views and tendencies. The insularity and complacency of the early established Church of England has tended to influence Anglican self-identity, and inhibit engagement with the broader society in favour of internal debate and dialogue. Nonetheless, there is significantly greater cohesion among Anglicans when they turn their attention outward. Anglicans worldwide are active in many areas of social and environmental concern.
'Continuing' churches.
The term 'Continuing Anglicanism' refers to a number of church bodies which have formed outside of the Anglican Communion in the belief that traditional forms of Anglican faith, worship and order have been unacceptably revised or abandoned within some Anglican Communion churches in recent decades. They therefore claim that they are 'continuing' traditional Anglicanism.
The modern Continuing Anglican movement principally dates to the Congress of St. Louis, held in the United States in 1977, at which participants rejected changes that had been made in the Episcopal Church's Book of Common Prayer and also the Episcopal Church's approval of the ordination of women to the priesthood. More recent changes in the North American churches of the Anglican Communion, such as the introduction of same-sex marriage rites and the ordination of gay and lesbian people to the priesthood and episcopate, have created further separations.
Continuing churches have generally been formed by people who have left the Anglican Communion. The original Anglican churches are charged by the Continuing Anglicans with being greatly compromised by secular cultural standards and liberal theology. Many Continuing Anglicans believe that the faith of some churches in communion with the Archbishop of Canterbury has become unorthodox and have therefore not sought to be in communion with him.
The original generation of continuing parishes in the United States were found mainly in metropolitan areas. Since the late 1990s a number have appeared in smaller communities, often as a result of a division in the town's existing Episcopal churches. The 2007–08 'Directory of Traditional Anglican and Episcopal Parishes', published by the Fellowship of Concerned Churchmen, contained information on over 900 parishes affiliated with either the Continuing Anglican churches or the Anglican realignment movement, a more recent wave of Anglicans withdrawing from the Anglican Communion's North American provinces.
Ordinariates within the Roman Catholic Church.
On 4 November 2009, Pope Benedict XVI issued an apostolic constitution, 'Anglicanorum Coetibus', to allow groups of former Anglicans to enter into full communion with the Roman Catholic Church as members of personal ordinariates. The 20 October 2009 announcement of the imminent constitution mentioned:
For each personal ordinariate the ordinary may be a former Anglican bishop or priest. It is expected that provision will be made to allow the retention of aspects of Anglican liturgy; cf. Anglican Use.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1216'>
Athens
Athens (; , 'Athína', ; , 'Athēnai') is the capital and largest city of Greece. Athens dominates the Attica region and is one of the world's oldest cities, with its recorded history spanning around 3,400 years. Classical Athens, as a landlocked location, was a powerful city-state that emerged in conjunction with the seagoing development of the port of Piraeus. A centre for the arts, learning and philosophy, home of Plato's Academy and Aristotle's Lyceum, it is widely referred to as the cradle of Western civilization and the birthplace of democracy, largely due to the impact of its cultural and political achievements during the 5th and 4th centuries BC on the rest of the then known European continent. Today a cosmopolitan metropolis, modern Athens is central to economic, financial, industrial, political and cultural life in Greece. In 2012, Athens was ranked the world's 39th richest city by purchasing power and the 77th most expensive in a UBS study.
The City of Athens is recognised as a global city because of its geo-strategic location and its importance in finance, commerce, media, entertainment, arts, international trade, culture, education and tourism. It is one of the biggest economic centres in southeastern Europe, with a large financial sector and features the largest passenger port in Europe and the third largest in the world. Athens has a population of 664,046 (in 2011, 796,442 in 2004) within its administrative limits and a land area of . The urban area of Athens (Greater Athens and Greater Piraeus) extends beyond its administrative municipal (City) limits, with a population of 3,074,160 (in 2011) over an area of . According to Eurostat in 2004, the Athens Larger Urban Zone (LUZ) was the 7th most populous LUZ in the European Union (the 5th most populous capital city of the EU), with a population of 4,013,368. Athens is also the southernmost capital on the European mainland.
The heritage of the classical era is still evident in the city, represented by ancient monuments and works of art, the most famous of all being the Parthenon, considered a key landmark of early Western civilization. The city also retains Roman and Byzantine monuments, as well as a smaller number of Ottoman monuments.
Athens is home to two UNESCO World Heritage Sites, the Acropolis of Athens and the medieval Daphni Monastery. Landmarks of the modern era, dating back to the establishment of Athens as the capital of the independent Greek state in 1834, include the Hellenic Parliament (19th century) and the Athens Trilogy, consisting of the National Library of Greece, the Athens University and the Academy of Athens. Athens was the host city of the first modern-day Olympic Games in 1896, and 108 years later it welcomed home the 2004 Summer Olympics. Athens is home to the National Archaeological Museum, featuring the world's largest collection of ancient Greek antiquities, as well as the new Acropolis Museum.
Etymology.
In Ancient Greek Athens' name was ('Athēnai' ) in plural. However, in earlier Greek, such as Homeric Greek, the name was in the singular form, as ('Athēnē') and was then rendered in the plural, like those of ('Thēbai') and ('Mukēnai'). The root of the word is probably not of Greek or Indo-European origin, and is a possible remnant of the Pre-Greek substrate of Attica, as with the name of the goddess Athena (Attic 'Athēnā', Ionic 'Athēnē' and Doric 'Athānā'), who was always associated with the city of Athens. During the medieval period the name of the city was rendered once again in the singular as . However, because of the conservatism of the written language, remained the official name of the city until the abandonment of Katharevousa in the 1970s, when Ἀθήνα became the official name.
Previously, there had been other etymologies by scholars of the 19th century. Lobeck proposed as the root of the name the word ('athos') or ('anthos') meaning flower, to denote Athens as the 'flowering' city. On the other hand, Döderlein proposed the stem of the verb , stem θη- ('thaō', stem 'thē-', 'to suck') to denote Athens as having fertile soil.
An etiological myth explaining how Athens acquired this name was well known among ancient Athenians and even became the theme of the sculpture on the west pediment of the Parthenon. The goddess Athena and the god Poseidon had many disagreements and battles between them, and one of these was a race to be the patron deity of the city. In an attempt to win over the people, Poseidon created a salt water spring by striking the ground with his trident, symbolizing naval power. However, when Athena created the olive tree, symbolizing peace and prosperity, the Athenians, under their ruler Cecrops, accepted the olive tree and named the city after Athena.
The city is sometimes referred to in Greek as ', which means in English 'the glorious city', or simply as ' ('protevousa'), 'the capital'.
History.
The oldest known human presence in Athens is the Cave of Schist, which has been dated to between the 11th and 7th millennia BC. Athens has been continuously inhabited for at least 7000 years.
By 1400 BC the settlement had become an important centre of the Mycenaean civilization and the Acropolis was the site of a major Mycenaean fortress, whose remains can be recognised from sections of the characteristic Cyclopean walls. Unlike other Mycenaean centres, such as Mycenae and Pylos, it is not known whether Athens suffered destruction in about 1200 BC, an event often attributed to a Dorian invasion, and the Athenians always maintained that they were 'pure' Ionians with no Dorian element. However, Athens, like many other Bronze Age settlements, went into economic decline for around 150 years afterwards.
Iron Age burials, in the Kerameikos and other locations, are often richly provided for and demonstrate that from 900 BC onwards Athens was one of the leading centres of trade and prosperity in the region. The leading position of Athens may well have resulted from its central location in the Greek world, its secure stronghold on the Acropolis and its access to the sea, which gave it a natural advantage over inland rivals such as Thebes and Sparta.
By the 6th century BC, widespread social unrest led to the reforms of Solon. These would pave the way for the eventual introduction of democracy by Cleisthenes in 508 BC. Athens had by this time become a significant naval power with a large fleet, and helped the rebellion of the Ionian cities against Persian rule. In the ensuing Greco-Persian Wars Athens, together with Sparta, led the coalition of Greek states that repelled the Persians, defeating them decisively at Marathon in 490 BC, and crucially at Salamis in 480 BC.
The decades that followed became known as the Golden Age of Athenian democracy, during which time Athens became the leading city of Ancient Greece, with its cultural achievements laying the foundations of Western civilization. The playwrights Aeschylus, Sophocles and Euripides flourished in Athens during this time, as did the historians Herodotus and Thucydides, the physician Hippocrates, and the philosopher Socrates. Guided by Pericles, who promoted the arts and fostered democracy, Athens embarked on an ambitious building program that saw the construction of the Acropolis of Athens (including the Parthenon), as well as empire-building via the Delian League. Originally intended as an association of Greek city-states to continue the fight against the Persians, the league soon turned into a vehicle for Athens's own imperial ambitions. The resulting tensions brought about the Peloponnesian War (431–404 BC), in which Athens was defeated by its rival Sparta.
By the mid-4th century BC, the northern Greek kingdom of Macedon was becoming dominant in Athenian affairs. In 338 BC the armies of Philip II defeated an alliance of some of the Greek city-states including Athens and Thebes at the Battle of Chaeronea, effectively ending Athenian independence. Later, under Rome, Athens was given the status of a free city because of its widely admired schools. The Roman emperor Hadrian, in the 2nd century AD, constructed a library, a gymnasium, an aqueduct which is still in use, several temples and sanctuaries, a bridge and financed the completion of the Temple of Olympian Zeus.
By the end of Late Antiquity, the city experienced decline followed by recovery in the second half of the Middle Byzantine Period, in the 9th to 10th centuries AD, and was relatively prosperous during the Crusades, benefiting from Italian trade. After the Fourth Crusade the Duchy of Athens was established. In 1458 it was conquered by the Ottoman Empire and entered a long period of decline.
Following the Greek War of Independence and the establishment of the Greek Kingdom, Athens was chosen as the capital of the newly independent Greek state in 1834, largely due to historical and sentimental reasons. At the time it was a town of modest size built around the foot of the Acropolis. The first King of Greece, Otto of Bavaria, commissioned the architects Stamatios Kleanthis and Eduard Schaubert to design a modern city plan fit for the capital of a state.
The first modern city plan consisted of a triangle defined by the Acropolis, the ancient cemetery of Kerameikos and the new palace of the Bavarian king (now housing the Greek Parliament), so as to highlight the continuity between modern and ancient Athens. Neoclassicism, the international style of this epoch, was the architectural style through which Bavarian, French and Greek architects such as Hansen, Klenze, Boulanger or Kaftantzoglou designed the first important public buildings of the new capital. In 1896 Athens hosted the first modern Olympic Games. During the 1920s a number of Greek refugees, expelled from Asia Minor after the Greco-Turkish War (1919–1922), swelled Athens's population; nevertheless it was most particularly following World War II, and from the 1950s and 1960s, that the population of the city exploded, and Athens experienced a gradual expansion.
In the 1980s it became evident that smog from factories and an ever increasing fleet of automobiles, as well as a lack of adequate free space due to congestion, had evolved into the city's most important challenge. A series of anti-pollution measures taken by the city's authorities in the 1990s, combined with a substantial improvement of the city's infrastructure (including the Attiki Odos motorway, the expansion of the Athens Metro, and the new Athens International Airport), considerably alleviated pollution and transformed Athens into a much more functional city. In 2004 Athens hosted the 2004 Summer Olympics.
Geography.
Geology.
Athens sprawls across the central plain of Attica that is often referred to as the 'Athens or Attica Basin' (Greek: Λεκανοπέδιο Αττικής). The basin is bounded by four large mountains: Mount Aegaleo to the west, Mount Parnitha to the north, Mount Penteli to the northeast and Mount Hymettus to the east. Beyond Mount Aegaleo lies the Thriasian plain, which forms an extension of the central plain to the west. The Saronic Gulf lies to the southwest. Mount Parnitha is the tallest of the four mountains (), and has been declared a national park.
Athens is built around a number of hills. Lycabettus is one of the tallest hills of the city proper and provides a view of the entire Attica Basin. The geomorphology of Athens is deemed to be one of the most complex in the world, because its mountains cause a temperature inversion phenomenon which, along with the Greek Government's difficulties in controlling industrial pollution, was responsible for the air pollution problems the city has faced. This issue is not unique to Athens; for instance, Los Angeles and Mexico City also suffer from similar inversion problems.
Cephissus river, Ilisos and Eridanos stream are the historical rivers of Athens.
Climate.
Athens has a subtropical Mediterranean climate (Köppen 'Csa') and receives just enough annual precipitation to avoid Köppen's 'BSh' (semi-arid climate) classification. The dominant feature of Athens's climate is the alternation between prolonged hot and dry summers and mild winters with moderate rainfall. With an average of of yearly precipitation, rainfall occurs largely between the months of October and April. July and August are the driest months, when thunderstorms occur only once or twice a month. Winters are cool and rainy, with a January average of in Nea Filadelfeia and in Hellinikon. Snowstorms are infrequent but can cause disruption when they occur. Snowfalls are more frequent in the northern suburbs of the city.
The annual precipitation of Athens is typically lower than in other parts of Greece, especially western Greece. As an example, Ioannina receives around per year, and Agrinio around per year. Daily average highs for July (1955–2004) have been measured at the Nea Filadelfeia weather station, but other parts of the city may be even warmer, in particular its western areas, partly due to industrialization and partly due to a number of natural factors, knowledge of which has been available since the mid-19th century. Temperatures often surpass during the city's notorious heatwaves.
Athens is affected by the urban heat island effect in some areas which is caused by human activity, altering its temperatures compared to the surrounding rural areas, and bearing detrimental effects on energy usage, expenditure for cooling, and health. The urban heat island of the city has also been found to be partially responsible for alterations of the climatological temperature time-series of specific Athens meteorological stations, due to its impact on the temperatures and the temperature trends recorded by some meteorological stations. On the other hand, specific meteorological stations, such as the National Garden station and Thiseio meteorological station, are less affected or do not experience the urban heat island.
Athens holds the World Meteorological Organization record for the highest temperature ever recorded in Europe, at , which was recorded in the Elefsina and Tatoi suburbs of Athens on 10 July 1977.
Administration.
Athens became the capital of Greece in 1834, following Nafplion, which was the provisional capital from 1829. The municipality (City) of Athens is also the capital of the Attica region. 'Athens' can refer either to the municipality of Athens, to Greater Athens, or to the entire Athens Urban Area.
Attica region.
The Athens Metropolitan Area, sprawling over , is located within the Attica region. Attica is the most populous region of Greece, reaching 3,827,624 inhabitants in 2011, yet it is also one of the smallest regions in the country.
The Attica region itself is split into eight regional units, out of which the first four form Greater Athens, while the regional unit of Piraeus forms Greater Piraeus. Together they make up the contiguous built-up Athens Urban Area, spanning over .
Until 2010, the first four regional units above also made up the abolished Athens Prefecture (what is referred to as Greater Athens), which was the most populous of the Prefectures of Greece at the time, accounting for 2,640,701 people (in 2011) within an area of .
Municipality (City) of Athens.
The municipality (City) of Athens is the most populous in Greece, with a population of 664,046 people (in 2011), forming the core of the Athens Urban Area within the 'Attica Basin'. The current mayor of Athens is Giorgos Kaminis. The municipality is divided into seven municipal districts, which are mainly used for administrative purposes.
Population data for the 7 municipal districts of Athens (2001 census):
1st: 97,570
2nd: 110,069
3rd: 48,305
4th: 87,672
5th: 95,234
6th: 147,181
7th: 159,483
For Athenians, the most popular way of dividing the city proper is by its neighbourhoods, such as Pagkrati, Ambelokipi, Exarcheia, Patissia, Ilissia, Petralona, Koukaki and Kypseli, each with its own distinct history and characteristics.
The Athens municipality also forms the core and centre of Greater Athens, which consists of the Athens municipality and 34 more municipalities, divided among the four regional units (North, West, Central and South Athens) mentioned above.
The municipalities of Greater Athens along with the municipalities within Greater Piraeus (regional unit of Piraeus) form the Athens Urban Area, while the larger metropolitan area includes several additional suburbs and towns surrounding the dense urban area of the Greek capital.
Cityscape.
Architecture.
Athens incorporates architectural styles ranging from Greco-Roman and Neoclassical to modern. These styles are often found in the same areas, as Athens is not marked by a uniformity of architectural style.
For most of the 19th century, Neoclassicism dominated Athens, along with some deviations from it such as Eclecticism, especially in the early 20th century. The Hellenic Parliament was the first important public building to be built, between 1836 and 1843. Later in the mid and late 19th century, Theophil Freiherr von Hansen and Ernst Ziller took part in the construction of many neoclassical buildings, such as the Athens Academy and the Zappeion Hall. Ziller also designed many private mansions in the centre of Athens which gradually became public, usually through donations, such as Schliemann's Iliou Melathron.
Beginning in the 1920s, Modern architecture including Bauhaus and Art Deco began to exert an influence on almost all Greek architects, and buildings both public and private were constructed in accordance with these styles. Localities with a great number of such buildings include Kolonaki, and some areas of the centre of the city; neighbourhoods developed in this period include Kypseli.
In the 1950s and 1960s during the extension and development of Athens, other modern movements such as the International style played an important role. The centre of Athens was largely rebuilt, leading to the demolition of a number of neoclassical buildings. The architects of this era employed materials such as glass, marble and aluminium, and many blended modern and classical elements. After World War II, internationally known architects to have designed and built in the city included Walter Gropius, with his design for the US Embassy, and, among others, Eero Saarinen, in his postwar design for the east terminal of the Ellinikon Airport.
Notable Greek architects of the 1930s–1960s included Konstantinos Doxiadis, Dimitris Pikionis, Pericles A. Sakellarios, Aris Konstantinidis, and others.
City of Athens neighbourhoods.
The municipality of Athens, the city centre of the Athens Urban Area, is divided into several districts: Omonoia, Syntagma, Exarcheia, Agios Nikolaos, Neapolis, Lykavittos, Lofos Strefi, Lofos Finopoulou, Lofos Filopappou, Pedion Areos, Metaxourgeio, Aghios Kostantinos, Larissa Station, Kerameikos, Psiri, Monastiraki, Gazi, Thission, Kapnikarea, Aghia Irini, Aerides, Anafiotika, Plaka, Acropolis, Pnyka, Makrygianni, Lofos Ardittou, Zappeion, Aghios Spyridon, Pangration, Kolonaki, Dexameni, Evaggelismos, Gouva, Aghios Ioannis, Neos Kosmos, Koukaki, Kynosargous, Fix, Ano Petralona, Kato Petralona, Rouf, Votanikos, Profitis Daniil, Akadimia Platonos, Kolonos, Kolokynthou, Attikis Square, Lofos Skouze, Sepolia, Kypseli, Aghios Meletios, Nea Kypseli, Gyzi, Polygono, Ampelokipoi, Panormou-Gerokomeio, Pentagono, Ellinorosson, Nea Filothei, Ano Kypseli, Tourkovounia-Lofos Patatsou, Lofos Elikonos, Koliatsou, Thymarakia, Kato Patisia, Treis Gefyres, Aghios Eleftherios, Ano Patisia, Kypriadou, Prompona, Aghios Panteleimonas, Pangrati, Goudi and Ilisia.
The Gazi area, one of the latest in full redevelopment, is located around a historic gas factory, now converted into the 'Technopolis' cultural multiplex, and also includes artists' areas, small clubs, bars and restaurants, as well as Athens's 'Gay Village'. The metro's expansion to the western suburbs of the city has brought easier access to the area since spring 2007, as the blue line now stops at Gazi (Kerameikos station).
Urban and suburban municipalities.
The Athens Metropolitan Area consists of 58 densely populated municipalities, sprawling around the municipality of Athens (the city centre) in virtually all directions. For the Athenians, all the urban municipalities surrounding the city centre are called suburbs. According to their geographic location in relation to the City of Athens, the suburbs are divided into four zones; the northern suburbs (including Agios Stefanos, Dionysos, Ekali, Nea Erythraia, Kifissia, Maroussi, Pefki, Lykovrysi, Metamorfosi, Nea Ionia, Nea Filadelfeia, Irakleio, Vrilissia, Melissia, Penteli, Chalandri, Agia Paraskevi, Galatsi, Psychiko and Filothei); the southern suburbs (including Alimos, Nea Smyrni, Moschato, Kallithea, Agios Dimitrios, Palaio Faliro, Elliniko, Glyfada, Argyroupoli, Ilioupoli, Voula and Vouliagmeni); the eastern suburbs (including Zografou, Dafni, Vyronas, Kaisariani, Cholargos and Papagou); and the western suburbs (including Peristeri, Ilion, Egaleo, Agia Varvara, Chaidari, Petroupoli, Agioi Anargyroi and Kamatero).
The Athens city coastline, extending from the major commercial port of Piraeus to the southernmost suburb of Varkiza, is also connected to the city centre by a tram.
In the northern suburb of Maroussi, the upgraded main Olympic Complex (known by its Greek acronym OAKA) dominates the skyline. The area has been redeveloped according to a design by the Spanish architect Santiago Calatrava, with steel arches, landscaped gardens, fountains, futuristic glass, and a landmark new blue glass roof which was added to the main stadium. A second Olympic complex, next to the sea at the beach of Palaio Faliro, also features modern stadia, shops and an elevated esplanade. Work is underway to transform the grounds of the old Athens Airport – named Elliniko – in the southern suburbs, into one of the largest landscaped parks in Europe, to be named the Hellenikon Metropolitan Park.
Many of the southern suburbs (such as Alimos, Palaio Faliro, Elliniko, Voula, Vouliagmeni and Varkiza) host a number of sandy beaches, most of which are operated by the Greek National Tourism Organisation and require an entrance fee. Casinos operate both on Mount Parnitha, some distance from downtown Athens (accessible by car or cable car), and in the nearby town of Loutraki (accessible by car via the Athens–Corinth National Highway, or by the Proastiakos suburban rail service).
Parks and zoos.
Parnitha National Park is punctuated by well-marked paths, gorges, springs, torrents and caves dotting the protected area. Hiking and mountain-biking in all four mountains are popular outdoor activities for residents of the city. The National Garden of Athens was completed in 1840 and is a green refuge of 15.5 hectares in the centre of the Greek capital. It is to be found between the Parliament and Zappeion buildings, the latter of which maintains its own garden of seven hectares.
Parts of the city centre have been redeveloped under a masterplan called the 'Unification of Archaeological Sites of Athens', which has also gathered funding from the EU to help enhance the project. The landmark Dionysiou Areopagitou Street has been pedestrianised, forming a scenic route. The route starts from the Temple of Olympian Zeus at Vasilissis Olgas Avenue, continues under the southern slopes of the Acropolis near Plaka, and finishes just beyond the Temple of Hephaestus in Thiseio. The route in its entirety provides visitors with views of the Parthenon and the Agora (the meeting point of ancient Athenians), away from the busy city centre.
The hills of Athens also provide green space. Lycabettus, Philopappos hill and the area around it, including Pnyx and Ardettos hill, are planted with pines and other trees, with the character of a small forest rather than typical metropolitan parkland. Also to be found is the Pedion tou Areos ('Field of Mars') of 27.7 hectares, near the National Archaeological Museum.
Athens' largest zoo is the Attica Zoological Park, a 20-hectare (49-acre) private zoo located in the suburb of Spata. The zoo is home to around 2000 animals representing 400 species, and is open 365 days a year. Smaller zoos exist within public gardens or parks, such as the zoo within the National Garden of Athens.
Economy.
Athens is the financial capital of Greece, and multinational companies such as Ericsson, Siemens, Motorola and Coca-Cola have their regional research and development headquarters there.
Demographics.
Mycenaean Athens in 1600–1100 BC could have reached the size of Tiryns; that would put the population in the range of 10,000–15,000. During the Greek Dark Ages the population of Athens was around 4,000 people. In 700 BC the population grew to 10,000. In 500 BC the area probably contained 200,000 people. During the classical period the city's population is estimated at 150,000–350,000, and up to 610,000 according to Thucydides. When Demetrius of Phalerum conducted a population census in 317 BC, the population was 21,000 free citizens, plus 10,000 resident aliens and 400,000 slaves. This suggests a total population of 431,000.
The municipality of Athens has an official population of 664,046 people. The four regional units that make up what is referred to as Greater Athens have a combined population of 2,640,701. They together with the regional unit of Piraeus (Greater Piraeus) make up the dense Athens Urban Area which reaches a total population of 3,074,160 inhabitants (in 2011).
The ancient site of Athens is centred on the rocky hill of the Acropolis. In ancient times the port of Piraeus was a separate city, but it has now been absorbed into the Athens Urban Area. The rapid expansion of the city, which continues to this day, began in the 1950s and 1960s, because of Greece's transition from an agricultural to an industrial nation. The expansion is now particularly toward the east and northeast (a tendency greatly related to the new Eleftherios Venizelos International Airport and the Attiki Odos, the freeway that cuts across Attica). By this process Athens has engulfed many former suburbs and villages in Attica, and continues to do so. The table below shows the historical population of Athens in recent times.
Details.
The large city centre of the Greek capital falls directly within the municipality of Athens, the most populous municipality in Greece. Piraeus also forms a significant city centre of its own within the Athens Urban Area, being the second most populous municipality within it, followed by Peristeri and Kallithea.
The Athens Urban Area today consists of 40 municipalities, 35 of which make up what is referred to as the Greater Athens municipalities, located within four regional units (North Athens, West Athens, Central Athens, South Athens); a further five make up the Greater Piraeus municipalities, located within the regional unit of Piraeus as mentioned above. The densely built-up urban area of the Greek capital sprawls across the 'Attica Basin' and has a total population of 3,074,160 (in 2011).
The Athens Metropolitan Area lies within the Attica region and includes a total of 58 municipalities, organized in seven regional units (those outlined above, along with East Attica and West Attica), and had reached a population of 3,737,550 based on the preliminary results of the 2011 census. The Athens and Piraeus municipalities serve as the two metropolitan centres of the Athens Metropolitan Area. There are also some inter-municipal centres serving specific areas. For example, Kifissia and Glyfada serve as inter-municipal centres for the northern and southern suburbs respectively.
Culture and contemporary life.
Archaeological hub.
The city is a world centre of archaeological research. Apart from national institutions such as Athens University and the Archaeological Society, it hosts several archaeological museums, including the National Archaeological Museum, the Cycladic Museum, the Epigraphic Museum, the Byzantine Museum, and the museums at the ancient Agora, Acropolis and Kerameikos. The city is also home to the Demokritos laboratory for Archaeometry, alongside regional and national archaeological authorities that form part of the Greek Department of Culture.
Athens hosts 17 Foreign Archaeological Institutes which promote and facilitate research by scholars from their home countries. As a result, Athens has more than a dozen archaeological libraries and three specialized archaeological laboratories, and is the venue of several hundred specialized lectures, conferences and seminars, as well as dozens of archaeological exhibitions, each year. At any given time, hundreds of international scholars and researchers in all disciplines of archaeology are to be found in the city.
Museums.
Athens' most important museums include:
Tourism.
Athens has been a destination for travellers since antiquity. Over the past decade, the city's infrastructure and social amenities have improved, in part due to its successful bid to stage the 2004 Olympic Games. The Greek Government, aided by the EU, has funded major infrastructure projects such as the state-of-the-art Eleftherios Venizelos International Airport, the expansion of the Athens Metro system, and the new Attiki Odos Motorway.
Entertainment and performing arts.
Athens is home to 148 theatrical stages, more than any other city in the world, including the ancient Odeon of Herodes Atticus, home to the Athens Festival, which runs from May to October each year. In addition to a large number of multiplexes, Athens hosts open-air garden cinemas. The city also supports music venues, including the Athens Concert Hall ('Megaron Moussikis'), which attracts world-class artists. The Athens Planetarium, located on Andrea Syngrou Avenue, is one of the largest and best-equipped digital planetaria in the world.
Sports.
Athens has a long tradition in sports and sporting events, serving as home to the most important clubs in Greek sport and housing a large number of sports facilities. The city has also been host to sports events of international importance.
Athens has hosted the Summer Olympic Games twice, in 1896 and 2004. The 2004 Summer Olympics required the development of the Athens Olympic Stadium, which has since gained a reputation as one of the most beautiful stadiums in the world and one of its most interesting modern monuments. The biggest stadium in the country, it hosted two UEFA Champions League finals, in 1994 and 2007. Athens' other major stadium, located in the Piraeus area, is the Karaiskakis Stadium, a sports and entertainment complex and host of the 1971 UEFA Cup Winners' Cup Final. In 2004, Greece's national football team won UEFA Euro 2004, held in Portugal, beating the host nation 1–0 in the final.
Athens has hosted the Euroleague final three times: in 1985 and 1993, both at the Peace and Friendship Stadium, commonly known as SEF, a large indoor arena, and in 2007 at the Olympic Indoor Hall. Events in other sports, such as athletics, volleyball and water polo, have also been hosted in the capital's venues.
Athens is home to three European multi-sport clubs: Olympiacos, Panathinaikos and AEK Athens. In football, Olympiacos have dominated the domestic competitions, Panathinaikos made it to the 1971 European Cup Final, while AEK Athens is the other member of the big three. These clubs also have basketball teams; Panathinaikos and Olympiacos are among the top powers in European basketball, having won the Euroleague six and three times respectively, whilst AEK Athens was the first Greek team to win a European trophy in any team sport.
Other notable clubs within Athens are Panionios, Atromitos, Apollon, Panellinios, Ethnikos Piraeus, Maroussi BC and Peristeri B.C. Athenian clubs have also had domestic and international success in other sports.
The Athens area encompasses a variety of terrain, notably hills and mountains rising around the city, and the capital is the only major city in Europe to be bisected by a mountain range. Four mountain ranges extend into city boundaries and thousands of miles of trails criss-cross the city and neighbouring areas, providing exercise and wilderness access on foot and bike.
Beyond Athens and across the prefecture of Attica, outdoor activities include skiing, rock climbing, hang gliding and windsurfing. Numerous outdoor clubs serve these sports, including the Athens Chapter of the Sierra Club, which leads over 4,000 outings annually in the area.
Music.
The most successful songs during the period 1870–1930 were the so-called Athenian serenades (Αθηναϊκές καντάδες), based on the Heptanesean kantádhes (καντάδες 'serenades'; sing.: καντάδα), and the songs performed on stage (επιθεωρησιακά τραγούδια 'theatrical revue songs') in the revues, musical comedies, operettas and nocturnes that dominated Athens' theatre scene.
Notable composers of operettas or nocturnes were Kostas Giannidis, Dionysios Lavrangas and Nikos Hatziapostolou, while Theophrastos Sakellaridis' 'The Godson' remains probably the most popular operetta. Although the Athenian songs were not autonomous artistic creations (in contrast with the serenades) and were originally connected mainly with dramatic forms of art, they eventually became hits as independent songs. Notable actors of Greek operettas who also made a series of melodies and songs popular at that time include Orestis Makris, the Kalouta sisters, Vasilis Avlonitis, Afroditi Laoutari, Eleni Papadaki, Marika Nezer, Marika Krevata and others. After 1930, wavering between American and European musical influences as well as the Greek musical tradition, Greek composers began to write music using the tunes of the tango, waltz, swing and foxtrot, sometimes combined with melodies in the style of the Athenian serenades' repertory. Nikos Gounaris was probably the most renowned composer and singer of the time.
In 1923, after the population exchange between Greece and Turkey that followed the Greco-Turkish War, many ethnic Greeks from Asia Minor fled to Athens. They settled in poor neighbourhoods and brought with them rebetiko music, making it popular in Greece as well; it later became the basis for laïko music. Other forms of song popular today in Greece are elafrolaika, entechno, dimotika and skyladika. Greece's most notable, and internationally famous, composers of Greek song, mainly of the entechno form, are Manos Hadjidakis and Mikis Theodorakis. Both composers have achieved fame in the West for their film scores.
Education.
Located on Panepistimiou Street, the old campus of the University of Athens, the National Library and the Athens Academy form the 'Athens Trilogy', built in the mid-19th century. Most of the university's workings have been moved to a much larger, modern campus located in the eastern suburb of Zografou. The second higher education institution in the city is the Athens Polytechnic School, located on Patission Street. There, on 17 November 1973, more than 13 students were killed and hundreds injured inside the university during the Athens Polytechnic uprising against the military junta that ruled the nation from 21 April 1967 until 23 July 1974.
Other universities that lie within Athens are the Athens University of Economics and Business, the Panteion University, the Agricultural University of Athens and the University of Piraeus. There are overall eleven state-supported institutions of higher (or tertiary) education located in the Metropolitan Area of Athens. These are, in chronological order: Athens School of Fine Arts (1837), National Technical University of Athens (1837), National and Kapodistrian University of Athens (1837), Agricultural University of Athens (1920), Athens University of Economics and Business (1920), Panteion University of Social and Political Sciences (1927), University of Piraeus (1938), Technological Educational Institute of Piraeus (1976), Technological Educational Institute of Athens (1983), Harokopio University (1990), and School of Pedagogical and Technological Education (2002). There are also several private 'colleges', as they are formally called in Greece, since the establishment of private universities is prohibited by the constitution. Many of them are accredited by a foreign state or university, such as the American College of Greece and the Athens campus of the University of Indianapolis.
Environment.
By the late 1970s, the pollution of Athens had become so destructive that, according to the then Greek Minister of Culture, Constantine Trypanis, '...the carved details on the five caryatids of the Erechtheum had seriously degenerated, while the face of the horseman on the Parthenon's west side was all but obliterated.' A series of measures taken by the authorities of the city throughout the 1990s resulted in the improvement of air quality; the appearance of smog (or 'nefos', as the Athenians used to call it) has become less common.
Measures taken by the Greek authorities throughout the 1990s have improved the quality of air over the Attica Basin. Nevertheless, air pollution still remains an issue for Athens, particularly during the hottest summer days. In late June 2007, the Attica region experienced a number of brush fires, including a blaze that burned a significant portion of a large forested national park on Mount Parnitha, considered critical to maintaining better air quality in Athens all year round. Damage to the park has led to worries over a stalling of the improvement of air quality in the city.
The major waste management efforts undertaken in the last decade (particularly the plant built on the small island of Psytalia) have improved water quality in the Saronic Gulf, and the coastal waters of Athens are now accessible again to swimmers. In January 2007, Athens faced a waste management problem when its landfill near Ano Liosia, an Athenian suburb, reached capacity. The crisis eased by mid-January when authorities began taking the garbage to a temporary landfill.
Transport.
Athens is served by a variety of transportation means, forming the largest mass transit system in Greece. The Athens Mass Transit System consists of a large bus fleet, a trolleybus fleet that mainly serves the city centre, the city's Metro, a commuter rail service, and a tram network connecting the southern suburbs to the city centre.
Bus transport.
Ethel (Etaireia Thermikon Leoforeion), or 'Thermal Bus Company', is the main operator of buses in Athens. Its network consists of about 300 bus lines which span the Athens Metropolitan Area, with an operating staff of 5,327 and a fleet of 1,839 buses. Of those 1,839 buses, 416 run on compressed natural gas, making up the largest fleet of natural gas-powered buses in Europe.
Besides being served by a fleet of natural-gas and diesel buses, the Athens Urban Area is also served by trolleybuses, or electric buses, as they are referred to in the name of the operating company. The network is operated by 'Electric Buses of the Athens and Piraeus Region' (ILPAP) and consists of 22 lines with an operating staff of 1,137. All of the 366 trolleybuses are equipped to run on diesel in case of power failure.
International and regional bus links are provided by KTEL from two InterCity Bus Terminals, Kifissos Bus Terminal A and Liosion Bus Terminal B, both located in the north-western part of the city. 'Kifissos' provides connections towards the Peloponnese and Attica, whereas 'Liosion' is used for most northerly mainland destinations.
Athens Metro.
The Athens Metro is more commonly known in Greece as the Attiko Metro and provides public transport throughout the Athens Urban Area. While its main purpose is transport, it also houses Greek artifacts found during construction of the system. The Athens Metro has an operating staff of 387 and runs two of the three metro lines, namely the Red (line 2) and Blue (line 3) lines, which were constructed largely during the 1990s, with the initial sections opened in January 2000. All routes run entirely underground, and a fleet of 42 trains consisting of 252 cars operates within the network, with a daily ridership of 550,000 passengers.
The Red Line (line 2) runs from Anthoupoli station to Elliniko station, connecting the western suburbs of Athens with the southeastern suburbs through the centre of the city. The line interchanges with the Green Line (line 1) at Attiki and Omonoia stations, with the Blue Line (line 3) at Syntagma station, and with the tram at Syntagma, Syngrou-Fix and Agios Ioannis stations.
The Blue Line (line 3) runs from the western suburbs, from Agia Marina through the Egaleo, central Monastiraki and Syntagma stations to Doukissis Plakentias avenue in the northeastern suburb of Halandri, then ascends to ground level to reach Eleftherios Venizelos International Airport, using the Suburban Railway infrastructure. The spring 2007 extension from Monastiraki westwards to Egaleo connected some of the main nightlife hubs of the city, namely those of Gazi (Kerameikos station), with Psirri (Monastiraki station) and the city centre (Syntagma station). Extensions are under construction towards the western and southwestern suburbs of Athens, reaching the port and centre of Piraeus. The new stations will be Agia Varvara, Koridallos, Nikaia, Maniatika, Piraeus and Dimotiko Theatro. The stations are planned to be ready in 2017, connecting Greece's biggest port, the Port of Piraeus, with its biggest airport, Athens International Airport 'Eleftherios Venizelos'.
Electric railway (ISAP).
Not run by the Athens Metro company is ISAP, the 'Electric Railway Company' line, which for many years served as Athens's primary urban rail transport. This is today the Green Line (line 1) of the Athens Metro network as shown on maps, and unlike the Red and Blue lines, ISAP has many above-ground sections on its route. This was the original metro line from Piraeus to Kifisia, serving 22 stations, with an operating staff of 730 and a fleet of 44 trains and 243 cars. ISAP's daily ridership is 600,000 passengers.
The Green Line (line 1) now serves 24 stations and forms the oldest line of the Athens metro network; for the most part it runs at ground level, connecting the port of Piraeus with the northern suburb of Kifissia. The line is set to be extended to Agios Stefanos, a suburb located to the north of Athens.
The Athens Metropolitan Railway system is managed by three companies: ISAP (line 1), Attiko Metro (lines 2 and 3), and the operator of the Proastiakos commuter rail, which is considered line 4.
Commuter/suburban rail (Proastiakos).
The Athens commuter rail service, referred to as the 'Proastiakos', connects Eleftherios Venizelos International Airport to the city of Corinth, west of Athens, via Larissa station (the city's central rail station) and the port of Piraeus. The service is sometimes considered the fourth line of the Athens Metro. The network is expected to be extended by 2010, with the Proastiakos reaching Xylokastro, west of Athens, and Chalkida.
Tram.
Athens Tram SA operates a fleet of 35 vehicles, called 'Sirios', which serve 48 stations; the company employs 345 people and carries an average of 65,000 passengers daily. The tram network covers ten Athenian suburbs. The network runs from Syntagma Square to the southwestern suburb of Palaio Faliro, where the line splits into two branches; the first runs along the Athens coastline toward the southern suburb of Voula, while the other heads toward the Piraeus district of Neo Faliro. The network covers the majority of the Saronic coastline. Further extensions are planned towards the major commercial port of Piraeus; the expansion will include 12 new stations and increase the overall length of the tram network.
Eleftherios Venizelos International Airport.
Athens is served by the Eleftherios Venizelos International Airport (AIA), located near the town of Spata in the eastern Messoghia plain, east of Athens. The airport, named 'European Airport of the Year' in 2004, is intended as an expandable hub for air travel in southeastern Europe and was constructed in 51 months at a cost of 2.2 billion euros. It employs a staff of 14,000.
The airport is served by the metro, the suburban rail, buses to the port of Piraeus, Athens' city centre and its suburbs, and by taxis. Eleftherios Venizelos International Airport accommodates 65 landings and take-offs per hour, with 24 passenger boarding bridges, 144 check-in counters, and a broad main terminal whose commercial area includes cafes, duty-free shops and a small museum.
In 2007, the airport handled 16,538,390 passengers, an increase of 9.7% over 2006. Of those, 5,955,387 passengers passed through the airport on domestic flights and 10,583,003 on international flights. Beyond its passenger traffic, AIA handled 205,294 total flights in 2007, or approximately 562 flights per day.
Railways and ferry connections.
Athens is the hub of the country's national railway system (OSE), which connects the capital with major cities across Greece and abroad (Istanbul, Sofia and Bucharest). Due to financial difficulties, all international rail services were suspended indefinitely in 2011. The Port of Piraeus connects Athens to the numerous Greek islands of the Aegean Sea through regular ferry departures, and also serves the cruise ships that arrive.
Motorways.
Two main motorways of Greece begin in Athens: the A1/E75, which crosses the Athens Urban Area from Piraeus and heads north towards Greece's second-largest city, Thessaloniki; and the A8/E94, which heads west towards Patras and incorporated the GR-8A. Before their completion, much of the road traffic used the GR-1 and the GR-8.
Athens' Metropolitan Area is served by the motorway network of the Attiki Odos toll motorway (code: A6). Its main section extends from the western industrial suburb of Elefsina to Athens International Airport, while two beltways, the Aigaleo Beltway (A65) and the Hymettus Beltway (A64), serve parts of western and eastern Athens respectively. In its full span, the Attiki Odos is the largest metropolitan motorway network in all of Greece.
Olympic Games.
1896 Summer Olympics.
The year 1896 brought the revival of the modern Olympic Games, through the efforts of Frenchman Pierre de Coubertin, and Athens was awarded the first modern Olympic Games. In 1896, the city had a population of 123,000, and the event helped boost its international profile. Of the venues used for these Olympics, the Kallimarmaro Stadium and the Zappeion were the most crucial. The Kallimarmaro is a replica of the ancient Athenian stadiums and the only major stadium (with a capacity of 60,000) to be made entirely of white marble from Mount Penteli, the same material used for the construction of the Parthenon.
1906 Summer Olympics.
The 1906 Summer Olympics, or the 1906 Intercalated games, were held in Athens. The intercalated competitions were intermediate games to the internationally organized Olympics, and were meant to be organized in Greece every four years, between the main Olympics. This idea later lost support from the IOC and these games were discontinued.
2004 Summer Olympics.
Athens was awarded the 2004 Summer Olympics on 5 September 1997 in Lausanne, Switzerland, after having lost a previous bid to host the 1996 Summer Olympics, to Atlanta, United States. It was to be the second time Athens would host the games, following the inaugural event of 1896. After an unsuccessful bid in 1990, the 1997 bid was radically improved, including an appeal to Greece's Olympic history. In the last round of voting, Athens defeated Rome with 66 votes to 41. Prior to this round, the cities of Buenos Aires, Stockholm and Cape Town had been eliminated from competition, having received fewer votes.
During the first three years of preparations, the International Olympic Committee had expressed concern over the speed of construction progress for some of the new Olympic venues. In 2000 the Organising Committee's president was replaced by Gianna Angelopoulos-Daskalaki, who was the president of the original Bidding Committee in 1997. From that point forward, preparations continued at a highly accelerated, almost frenzied pace.
Although the heavy cost, estimated at $1.5 billion, was criticized, Athens was transformed into a more functional city that enjoys modern technology both in transportation and in modern urban development. Some of the finest sporting venues in the world were created in the city, all of which were fully ready for the games. The games welcomed over 10,000 athletes from all 202 countries.
The 2004 Games were judged a success, as both security and organization worked well, and only a few visitors reported minor problems mainly concerning accommodation issues. The 2004 Olympic Games were described as 'Unforgettable, dream Games', by IOC President Jacques Rogge for their return to the birthplace of the Olympics, and for meeting the challenges of holding the Olympic Games. The only observable problem was a somewhat sparse attendance of some early events. Eventually, however, a total of more than 3.5 million tickets were sold, which was higher than any other Olympics with the exception of Sydney (more than 5 million tickets were sold there in 2000).
In 2008 it was reported that most of the Olympic venues had fallen into disrepair: according to those reports, 21 of the 22 facilities built for the games had either been left abandoned or are in a state of dereliction, with several squatter camps having sprung up around certain facilities, and a number of venues afflicted by vandalism, graffiti or strewn with rubbish. These claims, however, are disputed and likely to be inaccurate, as most of the facilities used for the Athens Olympics are either in use or in the process of being converted for post-Olympics use. The Greek Government has created a corporation, Olympic Properties SA, which is overseeing the post-Olympics management, development and conversion of these facilities, some of which will be sold off (or have already been sold off) to the private sector, while other facilities are still in use just as during the Olympics, or have been converted for commercial use or modified for other sports. Concerts and theatrical shows like those of the troupe Cirque du Soleil have recently been held in the complex.
Special Olympics.
The 2011 Special Olympics World Summer Games were held from 25 June to 4 July 2011 in Athens, Greece. The opening ceremony took place on 25 June 2011 at the Panathinaiko Stadium and the closing ceremony was held on 4 July 2011.
Over 7,500 athletes from 185 countries competed in a total of twenty-two sports.
International relations.
Twin towns – sister cities.
Athens is twinned with:
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1217'>
Anguilla
Anguilla ( ) is a British overseas territory in the Caribbean. It is one of the most northerly of the Leeward Islands in the Lesser Antilles, lying east of Puerto Rico and the Virgin Islands and directly north of Saint Martin. The territory consists of the main island of Anguilla itself, approximately 16 miles (26 km) long by 3 miles (5 km) wide at its widest point, together with a number of much smaller islands and cays with no permanent population. The island's capital is The Valley. The total land area of the territory is 35 square miles (90 km2), with a population of approximately 13,500 (2006 estimate).
Anguilla has become a popular tax haven, having no capital gains, estate, profit or other forms of direct taxation on either individuals or corporations. In April 2011, faced with a mounting deficit, it introduced a 3% 'Interim Stabilisation Levy', Anguilla's first form of income tax.
Etymology.
The name Anguilla derives from the word for 'eel' in any of various Romance languages (e.g. Spanish, French, Italian, Portuguese, Romanian, Catalan and Galician), probably chosen because of the island's eel-like shape.
History.
Anguilla was first settled by Amerindian tribes who migrated from South America. The earliest Native American artefacts found on Anguilla have been dated to around 1300 BC, and remains of settlements date from 600 AD. The date of European discovery is uncertain: some sources claim that Columbus sighted the island in 1493, while others state that the island was first discovered by the French in 1564 or 1565.
Anguilla was first colonised by English settlers from Saint Kitts, beginning in 1650. The French temporarily took over the island in 1666 but under the Treaty of Breda it was returned to English control. In this early colonial period Anguilla sometimes served as a place of refuge. A Major John Scott who visited in September 1667 wrote of leaving the island 'in good condition' and noted that in July 1668 '200 or 300 people fled thither in time of war.' Other early arrivals included Europeans from Antigua & Barbuda and Barbados.
It is likely that some of these early Europeans brought enslaved Africans with them. Historians confirm that African slaves lived in the region in the early 17th century. For example, Africans from Senegal lived in St. Christopher (today St. Kitts) in 1626. By 1672 a slave depot existed on the island of Nevis, serving the Leeward Islands. While the time of African arrival in Anguilla is difficult to place precisely, archival evidence indicates a substantial African presence (at least 100) on the island by 1683.
While traditional histories of the region assume that the English were the first settlers of Anguilla under British rule, recent scholarship focused on Anguilla offers a different view. It places more significance on early sociocultural diversity. The research suggested that St. Christopher, Barbados, Nevis and Antigua may have been important points of origin. Regarding African origins, West Africa as well as Central Africa are both posited as the ancestral homelands of some of Anguilla's early African population.
During the early colonial period, Anguilla was administered by the British through Antigua, but in 1824 it was placed under the administrative control of nearby Saint Kitts. In 1967, Britain granted Saint Kitts and Nevis full internal autonomy, and Anguilla was also incorporated into the new unified dependency, named Saint Christopher-Nevis-Anguilla, against the wishes of many Anguillians. This led to two rebellions in 1967 and 1969 (Anguillian Revolution), headed by Ronald Webster, and a brief period as a self-declared independent republic. The goal of the revolution was not independence per se, but rather independence from Saint Kitts and Nevis, and a return to being a British colony. British authority was fully restored in July 1971, and in 1980 Anguilla was finally allowed to secede from Saint Kitts and Nevis and become a separate British Crown colony (now a British overseas territory).
Governance.
Political system.
Anguilla is an internally self-governing overseas territory of the United Kingdom. Its politics take place in a framework of a parliamentary representative democratic dependency, whereby the Chief Minister is the head of government, and of a pluriform multi-party system.
The United Nations Committee on Decolonization includes Anguilla on the United Nations list of Non-Self-Governing Territories. The territory's constitution is Anguilla Constitutional Order 1 April 1982 (amended 1990). Executive power is exercised by the government. Legislative power is vested in both the government and the House of Assembly. The Judiciary is independent of the executive and the legislature.
Defence.
As Anguilla is a dependency of the UK, the UK is responsible for its military defence, although there are no active garrisons or armed forces present. Anguilla has a small marine police force, comprising around 32 personnel, which operates one M160-class fast patrol boat.
Geography.
Anguilla is a flat, low-lying island of coral and limestone in the Caribbean Sea, east of Puerto Rico and the Virgin Islands. It is directly north of Saint Martin, separated from that island by the Anguilla Channel. The soil is generally thin and poor, supporting scrub tropical and forest vegetation.
Anguilla is noted for its spectacular and ecologically important coral reefs and beaches. Apart from the main island of Anguilla itself, the territory includes a number of other smaller islands and cays, mostly tiny and uninhabited. Some of these are:
Climate.
Temperature.
Northeastern trade winds keep this tropical island relatively cool and dry. Average annual temperature is 80 °F (27 °C). July–October is its hottest period, December–February, its coolest.
Rainfall.
Rainfall averages 35 inches (890 mm) annually, although the figures vary from season to season and year to year. The island is subject to both sudden tropical storms and hurricanes, which occur in the period from July to November. The island suffered damage in 1995 from Hurricane Luis, and severe flooding of 5–20 feet from Hurricane Lenny.
Economy.
Anguilla's thin arid soil is largely unsuitable for agriculture, and the island has few land-based natural resources. Its main industries are tourism, offshore incorporation and management, offshore banking, captive insurance and fishing.
Before the 2008 world-wide crisis the economy of Anguilla was expanding rapidly, especially the tourism sector which was driving major new developments in partnerships with multi-national companies.
Anguilla's currency is the East Caribbean dollar, though the US dollar is also widely accepted. The exchange rate is fixed to the US dollar at US$1 = EC$2.70.
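Because the peg is fixed at US$1 = EC$2.70, converting between the two currencies is a single multiplication or division; a minimal illustration (the function names are ours, chosen for clarity):

```python
# East Caribbean dollar (XCD) is pegged to the US dollar at a fixed rate.
PEG = 2.70  # EC$ per US$1

def usd_to_xcd(usd: float) -> float:
    """Convert US dollars to East Caribbean dollars at the fixed peg."""
    return usd * PEG

def xcd_to_usd(xcd: float) -> float:
    """Convert East Caribbean dollars to US dollars at the fixed peg."""
    return xcd / PEG

print(round(usd_to_xcd(100), 2))  # 270.0
print(round(xcd_to_usd(270), 2))  # 100.0
```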
The economy, and especially the tourism sector, suffered a setback in late 1995 due to the effects of Hurricane Luis in September but recovered in 1996. Hotels were hit particularly hard during this time. Another economic setback occurred during the aftermath of Hurricane Lenny in 2000.
Anguilla's financial system comprises 7 banks, 2 money services businesses, more than 40 company managers, more than 50 insurers, 12 brokers, more than 250 captive intermediaries, more than 50 mutual funds and 8 trust companies.
Although in 2011 Anguilla became the fifth largest jurisdiction for captive insurance, behind Bermuda, Cayman, Vermont and Guernsey, there has been little growth since. Most of the upswing in Anguilla captive registrations came from the exodus of insurers leaving the British Virgin Islands beginning in 2008, after a change in leadership in the BVI's insurance department. Since 2010 a series of regulators (Richard Hands and Keith Bell) have, as in the BVI, not been conducive to doing business in Anguilla. Since 2011, the growth of domestic domiciles, combined with the headwinds created by infighting between the Anguilla government and the Financial Services Commission and the poor regulatory atmosphere, has stymied the growth in net new formations in Anguilla. The few captive management firms with staffed offices in Anguilla provide only very limited services locally. At the same time, changes in U.S. law have made forming an offshore captive more of a concern.
Anguilla aims to obtain 15% of its energy from solar power so that it is less reliant on expensive imported diesel. The Climate & Development Knowledge Network is helping the government gather the information it needs to change the territory's legislation, so it can integrate renewables into its grid. Barbados has also made good progress in switching to renewables, but many other small island developing states (SIDS) are still at the early stages of planning how to integrate renewable energy into their grids. 'For a small island we're very far ahead,' said Beth Barry, Coordinator of the Anguilla Renewable Energy Office. 'We've got an Energy Policy and a draft Climate Change policy and have been focussing efforts on the question of sustainable energy supply for several years now. As a result we have a lot of information we can share with other islands.'
Transportation.
Air.
Anguilla is served by Clayton J. Lloyd International Airport (prior to 4 July 2010 known as Wallblake Airport). The primary runway at the airport is in length and can accommodate moderate-sized aircraft. Services connect to various other Caribbean islands via regional carrier LIAT, local charter airlines and others. Although there are no direct scheduled flights to or from continental America or Europe, Tradewind Aviation and Cape Air provide scheduled air service to San Juan, Puerto Rico. The airport can handle large narrow-body jets such as the Boeing 727, Boeing 737 and Boeing 757.
Road.
Aside from taxis, there is no public transport on the island. Cars drive on the left.
Boat.
There are regular ferries from Saint Martin to Anguilla; the crossing from Marigot, St. Martin to Blowing Point, Anguilla takes about 20 minutes. Ferries commence service from 7:00 am. There is also a charter service from Blowing Point, Anguilla to Princess Juliana Airport to make travel easier. Ferry travel is the most common method of transport between Anguilla and St. Martin or St. Maarten.
Demographics.
The majority of residents (90.08%) are black, the descendants of slaves transported from Africa. Growing minorities include whites at 3.74% and people of mixed race at 4.65% (figures from 2001 census).
72% of the population is Anguillian while 28% is non-Anguillian (2001 census). Of the non-Anguillian population, many are citizens of the United States, United Kingdom, St Kitts & Nevis, the Dominican Republic, Jamaica and Nigeria.
2006 and 2007 saw an influx of large numbers of Chinese, Indian, and Mexican workers, brought in as labour for major tourist developments due to the local population not being large enough to support the labour requirements.
Culture.
The Anguilla National Trust (ANT) was established in 1988 and opened its offices in 1993 charged with the responsibility of preserving the heritage of the island, including its cultural heritage. The Trust has programmes encouraging Anguillian writers and the preservation of the island's history.
The island's cultural history begins with the Taino Indians. Artifacts have been found around the island, telling of life before European settlers arrived.
As throughout the Caribbean, holidays are a cultural fixture. Anguilla's most important holidays are of historic as much as cultural importance – particularly the anniversary of the emancipation (previously August Monday in the Park), celebrated as the Summer Festival. British festivities, such as the Queen's birthday, are also celebrated.
Cuisine.
Anguillian cuisine is influenced by native Caribbean, African, Spanish, French and English cuisines. Seafood is abundant, and includes prawns, shrimp, crab, spiny lobster, conch, mahi-mahi, red snapper, marlin and grouper. Salt cod is a staple food eaten by itself and used in stews, casseroles and soups. Livestock is limited due to the small size of the island, and people there utilise poultry, pork, goat and mutton, along with imported beef. Goat is the most commonly eaten meat, and is utilised in a variety of dishes.
A significant amount of the island's produce is imported due to the limited land suitable for agricultural production; much of the soil is sandy and infertile. Crops grown in Anguilla include tomatoes, peppers, limes and other citrus fruits, onion, garlic, squash, pigeon peas and callaloo. Starch staples include imported rice and other foods, imported or locally grown, such as yams, sweet potatoes and breadfruit.
Language.
Today most people in Anguilla speak a British-influenced variety of 'Standard' English. Other languages are also spoken on the island, including varieties of Spanish, Chinese and the languages of other immigrants. However, the most common language other than Standard English is the island's own English-lexifier Creole language (not to be confused with French Creole spoken in islands such as Haiti, Martinique, and Guadeloupe). It is referred to locally by terms such as 'dialect' (pronounced 'dialek'), Anguilla Talk, or 'Anguillian'. It has its main roots in early varieties of English and West African languages, and is similar to the dialects spoken in English-speaking islands throughout the Eastern Caribbean, in terms of its structural features and to the extent of being considered one single language.
Linguists who are interested in the origins of Anguillian and other Caribbean Creoles point out that some of its grammatical features can be traced to African languages while others can be traced to European languages. Three areas have been identified as significant for the identification of the linguistic origins of those forced migrants who arrived before 1710: the Gold Coast, the Slave Coast, and the Windward Coast.
Sociohistorical information from Anguilla's archives suggest that Africans and Europeans formed two distinct, but perhaps overlapping speech communities in the early phases of the island's colonisation. 'Anguillian' is believed to have emerged as the language of the masses as time passed, slavery was abolished, and locals began to see themselves as 'belonging' to Anguillian society.
Religion.
Religion is another aspect of Anguilla's cultural history. The Christian Church did not have a consistent or strong presence during the initial period of English colonisation; in this period the spiritual and religious practices of Europeans and Africans tended to reflect their regional origins. However, some Africans are likely to have encountered Christianity before arriving on the island, in West Africa as well as on other Caribbean islands. As early as 1813 Christian ministers formally ministered to enslaved Africans and promoted literacy in English among converts. The Wesleyan Missionary Society of England built churches and schools in 1817.
According to the 2001 census, Christianity is Anguilla's predominant religion, with 29.0 percent of the population practising Anglicanism. Another 23.9 percent are Methodist. Other churches on the island include Seventh-day Adventist, Baptist, Roman Catholic, and Jehovah's Witnesses (0.7%). Between 1992 and 2001 the number of followers of the Church of God and Pentecostal Churches increased considerably. There are at least 15 churches on the island, several of architectural interest. Although its followers are a minority on the island, Anguilla is an important location for the Rastafari religion: it is the birthplace of Robert Athlyi Rogers, author of The Holy Piby, which has had a strong influence on Rastafarian beliefs. Various other religions are practised as well.
Sport.
Boat racing has deep roots in Anguillian culture, and is the national sport. There are regular sailing regattas on national holidays, such as Carnival, which are contested by locally built and designed boats. These boats have names and have sponsors that print their logo on their sails.
As in many other former British Colonies, cricket is also a popular sport. Anguilla is the home of Omari Banks, who played for the West Indies Cricket Team, while Cardigan Connor played first-class cricket for English county side Hampshire and was 'chef de mission' (team manager) for Anguilla's Commonwealth Games team in 2002.
Rugby union is represented in Anguilla by the Anguilla Eels RFC, formed in April 2006. The Eels were finalists in the St. Martin tournament in November 2006, semi-finalists in 2007, 2008 and 2009, and champions in 2010. The club was founded by Scottish club national second row Martin Welsh; club sponsor and president of the AERFC, Ms Jacquie Ruan; and Canadian standout scrumhalf Mark Harris (Toronto Scottish RFC). The club hosted the crew of HMS Iron Duke in September 2008 in a very spirited game that went to the visitors 18-13. The St Barts Barracudas have also travelled to Anguilla to play the Eels, the visitors again prevailing, eleven points to six.
Anguilla is also the home of Zharnel Hughes, a sprinter who specialises in the 100 m and 200 m. He won the 100 m at the 2013 CARIFTA Games in a time of 10.44 seconds, even though that time was some way off his personal best.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1223'>
Telecommunications in Anguilla
This article is about communications systems in Anguilla.
Telephone.
Telephones - main lines in use: 6,200 (2002)
Telephones - mobile cellular: 1,800 (2002)
Telephone system:
<br>'Domestic:' Modern internal telephone system
<br>'International:' East Caribbean Fibre System (ECFS) cable system; microwave radio relay to the island of Saint Martin (Guadeloupe and Netherlands Antilles)
Mobile Phone (GSM).
Mobile Phone Operators:
Cable & Wireless (Anguilla) Ltd. - GSM 850 and 1900 MHz with Island-wide coverage
Weblinks -
Mobiles: ? (2007)
Radio.
Radio broadcast stations: AM 2, FM 7, shortwave 0 (2007)
Radios: 3,000 (1997)
Television.
Television broadcast stations: 1 (1997)
Televisions: 1,000 (1997)
Internet.
Internet country code: .ai (Top level domain)
Internet Service Providers (ISPs): 3 (Cable & Wireless, Weblinks, Caribbean Cable Communications)
Internet hosts: 205 (2008)
Internet users: 3,000 (2002)
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1227'>
Ashmore and Cartier Islands
The Territory of Ashmore and Cartier Islands is an uninhabited external territory of Australia consisting of four low-lying tropical islands in two separate reefs, and the 12 nautical mile territorial sea generated by the islands. The territory is located in the Indian Ocean situated on the edge of the continental shelf, about off the northwest coast of Australia and south of the Indonesian island of Rote.
Geography.
The territory includes 155.4 km2 Ashmore Reef (including West, Middle, and East Islands, and two lagoons within the reef) and 44.0 km2 Cartier Reef (including Cartier Island). They have a total of of shoreline, measured along the outer edge of the reef. There are no ports or harbours, only offshore anchorage.
West, Middle, and East Islands have a combined land area variously reported as 54 ha, 93 ha, and 112 ha (1 hectare is 0.01 km2, or about 2.5 acres). Cartier Island is an unvegetated sand island, with a reported land area of 0.4 ha.
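The unit conversions quoted in parentheses above can be sketched as follows (the estimate labels are ours, purely illustrative):

```python
# Convert the variously reported island land areas from hectares to km^2 and acres.
KM2_PER_HA = 0.01      # 1 hectare = 0.01 km^2
ACRES_PER_HA = 2.471   # 1 hectare is approximately 2.471 acres

areas_ha = [("low estimate", 54), ("mid estimate", 93),
            ("high estimate", 112), ("Cartier Island", 0.4)]

for name, ha in areas_ha:
    km2 = ha * KM2_PER_HA
    acres = ha * ACRES_PER_HA
    print(f"{name}: {ha} ha = {km2:.3f} km^2 (about {acres:.1f} acres)")
```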
Ashmore Reef is called 'Pulau Pasir' by Indonesians. In the Rote Island language, it is called 'Nusa Solokaek'. Both names have the meaning 'Sand Island'.
Nearby Hibernia Reef, northeast of Ashmore Reef, is not part of the territory, but rather belongs to Western Australia. It has no permanently dry land area, although large parts of the reef become exposed during low tide.
Government.
The territory is administered from Canberra by the Department of Regional Australia, Local Government, Arts and Sport, which is also responsible for the administration of the territories of Christmas Island, Cocos (Keeling) Islands, the Coral Sea Islands, Jervis Bay Territory and Norfolk Island. As part of the Machinery of Government Changes following the 2010 Federal Election, administrative responsibility for Territories was transferred from the Attorney General's Department to the Department of Regional Australia, Local Government, Arts and Sport. Defence of Ashmore and Cartier Islands is the responsibility of Australia, with periodic visits by the Royal Australian Navy, Royal Australian Air Force and Australian Customs and Border Protection Service. The vessel ACV Ashmore Guardian is stationed more-or-less permanently off the reef. The islands are also visited by seasonal caretakers and occasional scientific researchers.
On 21 October 2002 the nature reserve was recognised as a wetland of international importance when it was designated Ramsar Site 1220 under the Ramsar Convention on Wetlands.
Due to its proximity to Indonesia, and the area being traditional fishing grounds of Indonesian fishermen for centuries, some Indonesian groups claim Ashmore Reef to be part of Rote Ndao Regency of East Nusa Tenggara province. However, the Indonesian government does not appear to actively contest Australia's possession of the territory. Australia's sovereignty is backed up by the fact that the territory was not administered by the Netherlands (Indonesia's former colonial power), but by the British before it was transferred to Australia.
A memorandum of understanding between the Australian and Indonesian governments allows Indonesian fishermen access to their traditional fishing grounds within the region without any formal visa arrangements, subject to limits.
Ecology and environment.
Ashmore Reef Commonwealth Marine Reserve.
The Ashmore Reef Commonwealth Marine Reserve (formerly Ashmore Reef National Nature Reserve), established in August 1983, comprises an area of approximately 583 km2. It is of significant biodiversity value as it is in the flow of the Indonesian Throughflow ocean current from the Pacific Ocean through Maritime Southeast Asia to the Indian Ocean. It also lies in a surface current flowing west from the Arafura Sea and Timor Sea.
The Reserve comprises several marine habitats, including seagrass meadows, intertidal sand flats, coral reef flats, and lagoons, and supports an important and diverse range of species, including 14 species of sea snakes, a population of dugong that may be genetically distinct, a diverse marine invertebrate fauna, and many endemic species, especially of sea snakes and molluscs. There are feeding and nesting sites for loggerhead, hawksbill and green turtles. It is classified as an Important Bird Area and has 50,000 breeding pairs of various kinds of seabirds. A high abundance and diversity of sea cucumbers, over-exploited on other reefs in the region, is present, with 45 species recorded.
Cartier Island Commonwealth Marine Reserve.
Cartier Island Commonwealth Marine Reserve (formerly Cartier Island Marine Reserve), established in June 2000, comprises an area of approximately 172 km2, within a 4 nautical mile radius from the center of Cartier Island, and extends to a depth of 1 km below the sea floor. It includes the reef around Cartier Island, a small submerged pinnacle called Wave Governor Bank, and two shallow pools to the island's northeast.
Economy and migration.
There is no economic activity in the Territory. As Ashmore Reef is the closest point of Australian territory to Indonesia, it was a popular target for people smugglers transporting asylum seekers to Australia, despite its only wells being cholera-infected or otherwise contaminated and undrinkable. Once they had landed on Ashmore, asylum seekers could claim to have entered Australian territory and request to be processed as refugees. The use of Ashmore for this purpose created great notoriety during late 2001, when refugee arrivals became a major political issue in Australia. As Australia was not the country of first asylum for these 'boat people', the Australian Government did not consider that it had a responsibility to accept them.
A number of things were done to discourage the practice such as attempting to have the people smugglers arrested in Indonesia; the so-called Pacific Solution of processing them in third countries; the boarding and forced turnaround of the boats by Australian military forces, and finally excising Ashmore and many other small islands from the Australian migration zone. Two boatloads of asylum seekers were each detained for several days in the lagoon at Ashmore after failed attempts by the Royal Australian Navy to turn them back to Indonesia in October 2001.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1234'>
Acoustic theory
Acoustic theory is the field relating to mathematical description of sound waves. It is derived from fluid dynamics. See acoustics for the engineering approach.
The propagation of sound waves in a fluid (such as water) can be modeled by an equation of motion (conservation of momentum) and an equation of continuity (conservation of mass). With some simplifications, in particular constant density, they can be given as follows:
$$ \frac{\partial p}{\partial t} + \kappa\,\nabla\cdot\mathbf{u} = 0 \qquad \text{(mass balance)} $$
$$ \rho_0\,\frac{\partial \mathbf{u}}{\partial t} + \nabla p = 0 \qquad \text{(momentum balance)} $$
where $p$ is the acoustic pressure and $\mathbf{u}$ is the acoustic fluid velocity vector, $\mathbf{x}$ is the vector of spatial coordinates $(x_1, x_2, x_3)$, $t$ is the time, $\rho_0$ is the static mass density of the medium and $\kappa$ is the bulk modulus of the medium. The bulk modulus can be expressed in terms of the density and the speed of sound in the medium ($c_0$) as
$$ \kappa = \rho_0\,c_0^2. $$
If the acoustic fluid velocity field is irrotational, $\nabla\times\mathbf{u} = 0$, then the acoustic wave equation is a combination of these two sets of balance equations and can be expressed as
$$ \frac{\partial^2\mathbf{u}}{\partial t^2} - c_0^2\,\nabla^2\mathbf{u} = 0, \qquad \frac{\partial^2 p}{\partial t^2} - c_0^2\,\nabla^2 p = 0, $$
where we have used the vector Laplacian, $\nabla^2\mathbf{u} = \nabla(\nabla\cdot\mathbf{u}) - \nabla\times(\nabla\times\mathbf{u})$.
The acoustic wave equation (and the mass and momentum balance equations) are often expressed in terms of a scalar potential $\varphi$ where $\mathbf{u} = \nabla\varphi$. In that case the acoustic wave equation is written as
$$ \frac{\partial^2\varphi}{\partial t^2} - c_0^2\,\nabla^2\varphi = 0 $$
and the momentum balance and mass balance are expressed as
$$ p + \rho_0\,\frac{\partial\varphi}{\partial t} = 0, \qquad \rho + \frac{\rho_0}{c_0^2}\,\frac{\partial\varphi}{\partial t} = 0. $$
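As a quick numerical illustration of the linearized first-order system (mass balance $\partial p/\partial t = -\kappa\,\nabla\cdot\mathbf{u}$ and momentum balance $\rho_0\,\partial\mathbf{u}/\partial t = -\nabla p$), here is a minimal 1-D staggered-grid sketch; the grid size, pulse shape and air-like material constants are arbitrary choices for illustration, not from the source:

```python
import numpy as np

# Minimal 1-D staggered-grid (leapfrog) solver for the linear acoustic system:
#   dp/dt = -kappa * du/dx,   rho0 * du/dt = -dp/dx
rho0 = 1.2              # static density (kg/m^3), roughly air
c0 = 343.0              # speed of sound (m/s)
kappa = rho0 * c0**2    # bulk modulus, from kappa = rho0 * c0^2

nx, L = 400, 100.0
dx = L / nx
dt = 0.5 * dx / c0      # CFL-stable time step (Courant number 0.5)
x = np.linspace(0.0, L, nx)

p = np.exp(-((x - L / 2) / 2.0) ** 2)  # initial Gaussian pressure pulse
u = np.zeros(nx + 1)                   # velocity on staggered grid points

for _ in range(200):
    # Momentum balance: rho0 du/dt = -dp/dx  (interior staggered points)
    u[1:-1] -= dt / rho0 * (p[1:] - p[:-1]) / dx
    # Mass balance: dp/dt = -kappa du/dx
    p -= dt * kappa * (u[1:] - u[:-1]) / dx

# The initial pulse splits into two half-amplitude waves traveling left and right.
print(p.max())
```

The staggered update order (velocity first, then pressure) is what makes this explicit scheme stable under the CFL condition $c_0\,\Delta t/\Delta x \le 1$.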
Derivation of the governing equations.
The derivations of the above equations for waves in an acoustic medium are given below.
Conservation of momentum.
The equations for the conservation of linear momentum for a fluid medium are
$$ \rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \nabla\cdot\mathbf{s} + \rho\,\mathbf{b} $$
where $\mathbf{b}$ is the body force per unit mass, $p$ is the pressure, and $\mathbf{s}$ is the deviatoric stress. If $\boldsymbol{\sigma}$ is the Cauchy stress, then
$$ \boldsymbol{\sigma} = -p\,\mathbf{1} + \mathbf{s}, $$
where $\mathbf{1}$ is the rank-2 identity tensor.
We make several assumptions to derive the momentum balance equation for an acoustic medium. These assumptions and the resulting forms of the momentum equations are outlined below.
Assumption 1: Newtonian fluid.
In acoustics, the fluid medium is assumed to be Newtonian. For a Newtonian fluid, the deviatoric stress tensor is related to the velocity by
$$ \mathbf{s} = \mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^T - \tfrac{2}{3}(\nabla\cdot\mathbf{u})\mathbf{1}\right] + \mu_v\,(\nabla\cdot\mathbf{u})\mathbf{1} $$
where $\mu$ is the shear viscosity and $\mu_v$ is the bulk viscosity.
Therefore, the divergence of $\mathbf{s}$ is given by
$$ \nabla\cdot\mathbf{s} = \mu\,\nabla^2\mathbf{u} + \left(\mu_v + \tfrac{\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}). $$
Using the identity $\nabla^2\mathbf{u} = \nabla(\nabla\cdot\mathbf{u}) - \nabla\times(\nabla\times\mathbf{u})$, we have
$$ \nabla\cdot\mathbf{s} = \left(\mu_v + \tfrac{4\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}) - \mu\,\nabla\times(\nabla\times\mathbf{u}). $$
The equations for the conservation of momentum may then be written as
$$ \rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \left(\mu_v + \tfrac{4\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}) - \mu\,\nabla\times(\nabla\times\mathbf{u}) + \rho\,\mathbf{b}. $$
Assumption 2: Irrotational flow.
For most acoustics problems we assume that the flow is irrotational, that is, the vorticity is zero. In that case
$$ \nabla\times\mathbf{u} = 0 $$
and the momentum equation reduces to
$$ \rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \left(\mu_v + \tfrac{4\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}) + \rho\,\mathbf{b}. $$
Assumption 3: No body forces.
Another frequently made assumption is that the effect of body forces on the fluid medium is negligible ($\mathbf{b} = 0$). The momentum equation then further simplifies to
$$ \rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \left(\mu_v + \tfrac{4\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}). $$
Assumption 4: No viscous forces.
Additionally, if we assume that there are no viscous forces in the medium (the bulk and shear viscosities are zero), the momentum equation takes the form
$$ \rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p. $$
Assumption 5: Small disturbances.
An important simplifying assumption for acoustic waves is that the amplitude of the disturbance of the field quantities is small. This assumption leads to the linear or small signal acoustic wave equation. Then we can express the variables as the sum of the (time averaged) mean field ($\langle p\rangle, \langle\rho\rangle, \langle\mathbf{u}\rangle$) that varies in space and a small fluctuating field ($\tilde{p}, \tilde{\rho}, \tilde{\mathbf{u}}$) that varies in space and time. That is
$$ p = \langle p\rangle + \tilde{p}, \qquad \rho = \langle\rho\rangle + \tilde{\rho} $$
and
$$ \mathbf{u} = \langle\mathbf{u}\rangle + \tilde{\mathbf{u}}. $$
Then the momentum equation can be expressed as
$$ \left(\langle\rho\rangle + \tilde{\rho}\right)\left[\frac{\partial(\langle\mathbf{u}\rangle + \tilde{\mathbf{u}})}{\partial t} + (\langle\mathbf{u}\rangle + \tilde{\mathbf{u}})\cdot\nabla(\langle\mathbf{u}\rangle + \tilde{\mathbf{u}})\right] = -\nabla(\langle p\rangle + \tilde{p}). $$
Since the fluctuations are assumed to be small, products of the fluctuation terms can be neglected (to first order) and we have
$$ \langle\rho\rangle\left[\frac{\partial\tilde{\mathbf{u}}}{\partial t} + \langle\mathbf{u}\rangle\cdot\nabla\langle\mathbf{u}\rangle + \langle\mathbf{u}\rangle\cdot\nabla\tilde{\mathbf{u}} + \tilde{\mathbf{u}}\cdot\nabla\langle\mathbf{u}\rangle\right] + \tilde{\rho}\,\langle\mathbf{u}\rangle\cdot\nabla\langle\mathbf{u}\rangle = -\nabla(\langle p\rangle + \tilde{p}). $$
Assumption 6: Homogeneous medium.
Next we assume that the medium is homogeneous, in the sense that the time averaged variables
formula_43 and formula_44 have zero gradients, i.e.,
The momentum equation then becomes
Assumption 7: Medium at rest.
At this stage we assume that the medium is at rest which implies that the mean velocity is zero, i.e. formula_47. Then the balance of momentum reduces to
Dropping the tildes and using formula_49, we get the commonly used form of the acoustic momentum equation
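With the small-disturbance decomposition and Assumptions 6–7, the momentum equation collapses to the familiar linear form (a reconstruction; $\langle\cdot\rangle$ denotes the time-averaged mean, a tilde the fluctuation, and $\rho_0 \equiv \langle\rho\rangle$):

```latex
% Decomposition into mean and fluctuating fields
p = \langle p\rangle + \tilde{p},\qquad
\rho = \langle\rho\rangle + \tilde{\rho},\qquad
\mathbf{v} = \langle\mathbf{v}\rangle + \tilde{\mathbf{v}}
% Homogeneous medium at rest (zero mean-field gradients, <v> = 0):
\langle\rho\rangle\,\frac{\partial\tilde{\mathbf{v}}}{\partial t}
  = -\nabla\tilde{p}
% Dropping tildes and writing rho_0 for <rho>:
\rho_0\,\frac{\partial\mathbf{v}}{\partial t} + \nabla p = 0
```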
Conservation of mass.
The equation for the conservation of mass in a fluid volume (without any mass sources or sinks) is given by
where formula_52 is the mass density of the fluid and formula_53 is the fluid velocity.
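The displayed continuity equation, reconstructed in the same notation ($\rho$ mass density, $\mathbf{v}$ fluid velocity):

```latex
% Conservation of mass (no sources or sinks)
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0
```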
The equation for the conservation of mass for an acoustic medium can also be derived in a manner similar to that used for the conservation of momentum.
Assumption 1: Small disturbances.
From the assumption of small disturbances we have
and
Then the mass balance equation can be written as
If we neglect higher than first order terms in the fluctuations, the mass balance equation becomes
Assumption 2: Homogeneous medium.
Next we assume that the medium is homogeneous, i.e.,
Then the mass balance equation takes the form
Assumption 3: Medium at rest.
At this stage we assume that the medium is at rest, i.e., formula_47. Then the mass balance equation can be expressed as
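Carrying the same three assumptions (small disturbances, homogeneous medium, medium at rest) through the continuity equation gives the standard linearized form (reconstructed here):

```latex
% Linearized mass balance for a homogeneous medium at rest
\frac{\partial\tilde{\rho}}{\partial t}
  + \langle\rho\rangle\,\nabla\cdot\tilde{\mathbf{v}} = 0
```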
Assumption 4: Ideal gas, adiabatic, reversible.
In order to close the system of equations we need an equation of state for the pressure. To do that we assume that the medium is an ideal gas and all acoustic waves compress the medium in an adiabatic and reversible manner. The equation of state can then be expressed in the form of the differential equation:
where formula_63 is the specific heat at constant pressure, formula_64 is the specific heat at constant volume, and formula_65 is the wave speed. The value of formula_66 is 1.4 if the acoustic medium is air.
For small disturbances
where formula_9 is the speed of sound in the medium.
Therefore,
The balance of mass can then be written as
Dropping the tildes and defining formula_71 gives us the commonly used expression for the balance of mass in an acoustic medium:
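For an ideal gas compressed adiabatically and reversibly, the equation of state and the resulting mass balance take the standard forms below (a reconstruction; $\gamma = c_p/c_v$, $c_0$ the sound speed in the undisturbed medium, and $\kappa$ here denoting the bulk modulus $\rho_0 c_0^2$):

```latex
% Adiabatic ideal-gas equation of state (gamma = c_p / c_v)
\frac{dp}{d\rho} = \gamma\,\frac{p}{\rho} = c^{2}
% Small disturbances: pressure and density fluctuations are proportional
\tilde{p} = c_0^{2}\,\tilde{\rho}
% Substituting into the linearized mass balance and dropping tildes,
% with kappa = rho_0 c_0^2 (the bulk modulus):
\frac{\partial p}{\partial t} + \kappa\,\nabla\cdot\mathbf{v} = 0
```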
Governing equations in cylindrical coordinates.
If we use a cylindrical coordinate system formula_73 with basis vectors formula_74, then the gradient of formula_20 and the divergence of formula_76 are given by
where the velocity has been expressed as formula_78.
The equations for the conservation of momentum may then be written as
In terms of components, these three equations for the conservation of momentum in cylindrical coordinates are
The equation for the conservation of mass can similarly be written in cylindrical coordinates as
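In cylindrical coordinates $(r,\theta,z)$, with velocity $\mathbf{v}=v_r\mathbf{e}_r+v_\theta\mathbf{e}_\theta+v_z\mathbf{e}_z$, the operators and the resulting component equations take the standard forms (reconstructed here in the notation used above):

```latex
% Gradient and divergence in cylindrical coordinates
\nabla p = \frac{\partial p}{\partial r}\,\mathbf{e}_r
  + \frac{1}{r}\frac{\partial p}{\partial\theta}\,\mathbf{e}_\theta
  + \frac{\partial p}{\partial z}\,\mathbf{e}_z
\qquad
\nabla\cdot\mathbf{v} = \frac{1}{r}\frac{\partial (r v_r)}{\partial r}
  + \frac{1}{r}\frac{\partial v_\theta}{\partial\theta}
  + \frac{\partial v_z}{\partial z}
% Momentum components
\rho_0\,\frac{\partial v_r}{\partial t} = -\frac{\partial p}{\partial r},\qquad
\rho_0\,\frac{\partial v_\theta}{\partial t}
  = -\frac{1}{r}\frac{\partial p}{\partial\theta},\qquad
\rho_0\,\frac{\partial v_z}{\partial t} = -\frac{\partial p}{\partial z}
% Mass balance (kappa = rho_0 c_0^2)
\frac{\partial p}{\partial t}
  + \kappa\left[\frac{1}{r}\frac{\partial (r v_r)}{\partial r}
  + \frac{1}{r}\frac{\partial v_\theta}{\partial\theta}
  + \frac{\partial v_z}{\partial z}\right] = 0
```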
Time harmonic acoustic equations in cylindrical coordinates.
The acoustic equations for the conservation of momentum and the conservation of mass are often expressed in time harmonic form (at fixed frequency). In that case, the pressures and the velocity are assumed to be time harmonic functions of the form
where formula_83 is the frequency. Substitution of these expressions into the governing equations in cylindrical coordinates gives us the fixed frequency form of the conservation of momentum
and the fixed frequency form of the conservation of mass
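Writing the fields as time harmonic, $p = \hat{p}\,e^{-i\omega t}$ and $\mathbf{v} = \hat{\mathbf{v}}\,e^{-i\omega t}$, the governing equations become algebraic in time (a standard reconstruction); eliminating $\hat{\mathbf{v}}$ yields the Helmholtz equation for the pressure amplitude:

```latex
% Fixed-frequency momentum and mass balance
i\omega\rho_0\,\hat{\mathbf{v}} = \nabla\hat{p},\qquad
i\omega\,\hat{p} = \kappa\,\nabla\cdot\hat{\mathbf{v}}
% Eliminating the velocity gives the Helmholtz equation
\nabla^{2}\hat{p} + k^{2}\hat{p} = 0,\qquad k = \frac{\omega}{c_0}
```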
Special case: No z-dependence.
In the special case where the field quantities are independent of the z-coordinate we can eliminate formula_86 to get
Assuming that the solution of this equation can be written as
we can write the partial differential equation as
The left hand side is not a function of formula_90 while the right hand side is not a function of formula_91. Hence,
where formula_93 is a constant. Using the substitution
we have
The equation on the left is the Bessel equation which has the general solution
where formula_97 is the cylindrical Bessel function of the first kind and formula_98 are undetermined constants. The equation on the right has the general solution
where formula_100 are undetermined constants. Then the solution of the acoustic wave equation is
Boundary conditions are needed at this stage to determine formula_102 and the other undetermined constants.
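The radial equation obtained above is Bessel's equation, $r^{2}R'' + rR' + (k^{2}r^{2} - n^{2})R = 0$, whose bounded solution is $J_n(kr)$. As a quick numerical sanity check, added here as an illustration rather than as part of the original article, SciPy's Bessel routines confirm that $R(r) = J_n(kr)$ satisfies this equation to rounding error for arbitrary test values of the wavenumber $k$ and integer order $n$:

```python
# Numerical check that R(r) = J_n(k r) satisfies Bessel's equation
#   r^2 R'' + r R' + (k^2 r^2 - n^2) R = 0,
# the radial equation obtained by separation of variables above.
import numpy as np
from scipy.special import jv, jvp

k = 2.0   # wavenumber (arbitrary test value)
n = 3     # separation constant (integer for single-valued theta dependence)

r = np.linspace(0.5, 10.0, 200)
R = jv(n, k * r)
Rp = k * jvp(n, k * r, 1)        # dR/dr by the chain rule
Rpp = k**2 * jvp(n, k * r, 2)    # d^2 R / dr^2

residual = r**2 * Rpp + r * Rp + (k**2 * r**2 - n**2) * R
print(np.max(np.abs(residual)))  # should vanish up to rounding error
```

The same check with `yv`/`yvp` verifies the second, linearly independent solution $Y_n(kr)$, which is excluded when the field must remain finite at $r=0$.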
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1235'>
Alexander Mackenzie
Alexander Mackenzie, PC (January 28, 1822 – April 17, 1892), a building contractor and newspaper editor, was the second Prime Minister of Canada from November 7, 1873 to October 8, 1878.
Biography.
He was born in Logierait, Perthshire, Scotland to Alexander Mackenzie Sr. and Mary Stewart Fleming. He was the third of ten children. When Mackenzie was 13, his father died, and he was forced to end his formal education in order to help support his family. At the age of 16 he apprenticed as a stonemason, and by the age of 20 he had reached journeyman status in this field. Mackenzie immigrated to Canada in 1842 to seek a better life as well as to follow his sweetheart, Helen Neil. Shortly thereafter, he converted from Presbyterianism to Baptist beliefs. Mackenzie's faith was to link him to the increasingly influential temperance cause, particularly strong in Ontario, the province in which he lived and a constituency of which he was later to represent in the Parliament of Canada.
Mackenzie married Helen Neil (1826–52) in 1845; they had three children, only one of whom, a daughter, survived infancy. In 1853, he married Jane Sym (1825–93).
In Canada, Mackenzie continued his career as a stonemason, building many structures that still stand today. He began working as a general contractor, earning a reputation for being a hard-working, honest man with a working man's view on fiscal policy.
Mackenzie involved himself in politics almost from the moment he arrived in Canada. He campaigned relentlessly for George Brown, owner of the Reformist paper The Globe, in the 1851 election, helping him to win a seat in the assembly. In 1852 Mackenzie became editor of another reformist paper, the Lambton Shield. As editor, Mackenzie was perhaps a little too vocal: the paper was sued for libel by the local Conservative candidate, lost the suit, and was forced to fold under the resulting financial hardship. Mackenzie was elected to the Legislative Assembly as a supporter of George Brown in 1861.
Prime Minister (1873-1878).
When the Macdonald government fell due to the Pacific Scandal in 1873, the Governor General, Lord Dufferin, called upon Mackenzie, who had been chosen as the leader of the Liberal Party a few months earlier, to form a new government. Mackenzie formed a government and asked the Governor General to call an election for January 1874. The Liberals won, and Mackenzie remained prime minister until the 1878 election when Macdonald's Conservatives returned to power with a majority government.
It was unusual for a man of Mackenzie's humble origins to attain such a position in an age which generally offered such opportunity only to the privileged. Lord Dufferin expressed early misgivings about a stonemason taking over government. But on meeting Mackenzie, Dufferin revised his opinion: 'However narrow and inexperienced Mackenzie may be, I imagine he is a thoroughly upright, well-principled, and well-meaning man.'
Mackenzie also served as Minister of Public Works and oversaw the completion of the Parliament Buildings. While drawing up the plans, he included a circular staircase leading directly from his office to the outside of the building which allowed him to escape the patronage-seekers waiting for him in his ante-chamber. Proving Dufferin's reflections on his character to be true, Mackenzie disliked intensely the patronage inherent in politics. Nevertheless, he found it a necessary evil in order to maintain party unity and ensure the loyalty of his fellow Liberals.
In keeping with his democratic ideals, Mackenzie refused the offer of a knighthood three times, and was thus the only one of Canada's first eight Prime Ministers not to be knighted. His pride in his working class origins never left him. Once, while touring Fort Henry as prime minister, he asked the soldier accompanying him if he knew the thickness of the wall beside them. The embarrassed escort confessed that he didn't and Mackenzie replied, 'I do. It is five feet, ten inches. I know, because I built it myself!'
As Prime Minister, Alexander Mackenzie strove to reform and simplify the machinery of government. He introduced the secret ballot; advised the creation of the Supreme Court of Canada; the establishment of the Royal Military College of Canada in Kingston in 1874; the creation of the Office of the Auditor General in 1878; and struggled to continue progress on the national railway.
However, his term was marked by economic depression that had grown out of the Panic of 1873, which Mackenzie's government was unable to alleviate. In 1874, Mackenzie negotiated a new free trade agreement with the United States, eliminating the high protective tariffs on Canadian goods in US markets. However, this action did not bolster the economy, and construction of the CPR slowed drastically due to lack of funding. In 1876 the Conservative opposition announced a National Policy of protective tariffs, which resonated with voters. When an election was held at the conclusion of Mackenzie's five-year term, the Conservatives were swept back into office in a landslide victory.
After his government's defeat, Mackenzie remained Leader of the Opposition for another two years, until 1880. He remained an MP until his death in 1892 from a stroke that resulted from hitting his head during a fall. He died in Toronto and was buried in Lakeview Cemetery in Sarnia, Ontario.
In their 1999 study of the Prime Ministers of Canada, which included the results of a survey of Canadian historians, J.L. Granatstein and Norman Hillmer ranked Mackenzie No. 11, just after John Sparrow David Thompson.
Namesakes.
The following are named in honour of Alexander Mackenzie:
Supreme Court appointments.
Mackenzie chose the following jurists to be appointed as justices of the Supreme Court of Canada by the Governor General:
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1239'>
Ashoka
Ashoka Maurya (304–232 BCE), commonly known as Ashoka and also as Ashoka the Great, was an Indian emperor of the Maurya Dynasty who ruled almost all of the Indian subcontinent from circa 269 BCE to 232 BCE. One of India's greatest emperors, Ashoka reigned over a realm that stretched from the Hindu Kush mountains in the west to Bengal in the east and covered the entire Indian subcontinent except parts of present-day Tamil Nadu and Kerala. The empire's capital was Pataliputra (in Magadha, present-day Bihar), with provincial capitals at Taxila and Ujjain.
In about 260 BCE Ashoka waged a bitterly destructive war against the state of Kalinga (modern Odisha). He conquered Kalinga, which none of his ancestors had done. He embraced Buddhism after witnessing the mass deaths of the Kalinga War, which he himself had waged out of a desire for conquest. 'Ashoka reflected on the war in Kalinga, which reportedly had resulted in more than 100,000 deaths and 150,000 deportations.' Ashoka converted gradually to Buddhism beginning about 263 BCE. He was later dedicated to the propagation of Buddhism across Asia, and established monuments marking several significant sites in the life of Gautama Buddha. 'Ashoka regarded Buddhism as a doctrine that could serve as a cultural foundation for political unity.' Ashoka is now remembered as a philanthropic administrator. In the Kalinga edicts, he addresses his people as his 'children', and mentions that as a father he desires their good.
Ashoka is referred to as 'Samraat Chakravartin Ashoka' – the 'Emperor of Emperors Ashoka'. His name 'Aśoka' means 'painless, without sorrow' in Sanskrit (the 'a' privativum and 'śoka', 'pain, distress'). In his edicts, he is referred to as Devānāmpriya (Pali Devānaṃpiya, 'The Beloved of the Gods') and Priyadarśin (Pali Piyadasī, 'He who regards everyone with affection'). His fondness for his name's connection to the Saraca asoca tree, or the 'Ashoka tree', is also referenced in the Ashokavadana.
H.G. Wells wrote of Ashoka in his book 'The Outline of History': 'Amidst the tens of thousands of names of monarchs that crowd the columns of history, their majesties and graciousnesses and serenities and royal highnesses and the like, the name of Ashoka shines, and shines, almost alone, a star.' Along with the Edicts of Ashoka, his legend is related in the 2nd-century 'Ashokavadana' ('Narrative of Ashoka', a part of the 'Divyavadana'), and in the Sri Lankan text 'Mahavamsa' ('Great Chronicle'). The emblem of the modern Republic of India is an adaptation of the Lion Capital of Ashoka.
Biography.
Ashoka's early life.
Ashoka was born to the Mauryan emperor Bindusara and a relatively lower ranked wife of his, Dharmā [or Dhammā]. He was the grandson of Chandragupta Maurya, founder of Mauryan dynasty. The Avadana texts mention that his mother was queen Subhadrangī. According to Ashokavadana, she was the daughter of a Brahmin from the city of Champa. Empress Subhadrangī was a Brahmin of the Ajivika sect, and was found to be a suitable match for Emperor Bindusara. Though a palace intrigue kept her away from the emperor, this eventually ended, and she bore a son. It is from her exclamation 'I am now without sorrow,' that Ashoka got his name. The 'Divyāvadāna' tells a similar story, but gives the name of the queen as Janapadakalyānī.
Ashoka had several elder siblings, all of whom were his half-brothers from other wives of Bindusara. His fighting qualities were apparent from an early age and he was given royal military training. He was known as a fearsome hunter, and according to a legend, killed a lion with just a wooden rod. Because of his reputation as a frightening warrior and a heartless general, he was sent to curb the riots in the Avanti province of the Mauryan empire.
Rise to power.
The Buddhist text 'Divyavadana' describes Ashoka putting down a revolt due to activities of wicked ministers. This may have been an incident in Bindusara's times. Taranatha's account states that Chanakya, one of Bindusara's great lords, destroyed the nobles and kings of 16 towns and made himself the master of all territory between the eastern and the western seas. Some historians consider this as an indication of Bindusara's conquest of the Deccan while others consider it as suppression of a revolt. Following this, Ashoka was stationed at Ujjayini as governor.
Bindusara's death in 273 BCE led to a war over succession. According to the Divyavadana, Bindusara wanted his son Sushim to succeed him but Ashoka was supported by his father's ministers, who found Sushim to be arrogant and disrespectful towards them. A minister named Radhagupta seems to have played an important role in Ashoka's rise to the throne. The Ashokavadana recounts Radhagupta's offering of an old royal elephant to Ashoka for him to ride to the Garden of the Gold Pavilion, where King Bindusara would determine his successor. Ashoka later got rid of the legitimate heir to the throne by tricking him into entering a pit filled with live coals. Radhagupta, according to the Ashokavadana, would later be appointed prime minister by Ashoka once he had gained the throne. The 'Dipavamsa' and 'Mahavamsa' refer to Ashoka's killing of 99 of his brothers, sparing only one, named Vitashoka or Tissa, although there is no clear proof of this incident (many such accounts are saturated with mythological elements). The coronation happened in 269 BCE, four years after his succession to the throne.
Early life as emperor.
Buddhist legends state that Ashoka was of a wicked nature and bad temper. He also built Ashoka's Hell, an elaborate torture chamber deemed the 'Paradisal Hell' because its beautiful exterior contrasted with the acts carried out inside by his appointed executioner Girikaa. This earned him the name 'Caṇḍa Ashoka' ('Caṇḍāśoka'), meaning 'Ashoka the Fierce' in Sanskrit. Professor Charles Drekmeier cautions that the Buddhist legends intend to dramatise the change that Buddhism brought in him, and therefore exaggerate Ashoka's past wickedness and his piousness after the conversion.
Ascending the throne, Ashoka expanded his empire over the next eight years, from the present-day boundaries of Assam in the east to Iran in the west, and from the Pamir Knot in the north to the peninsula of southern India, excepting present-day Tamil Nadu and Kerala, which were ruled by the three ancient Tamil kingdoms.
Conquest of Kalinga.
While the early part of Ashoka's reign was apparently quite bloodthirsty, he became a follower of the Buddha's teachings after his conquest of Kalinga on the east coast of India in the present-day states of Odisha and North Coastal Andhra Pradesh. Kalinga was a state that prided itself on its sovereignty and democracy. With its monarchical parliamentary democracy it was quite an exception in ancient Bharata where there existed the concept of Rajdharma. Rajdharma means the duty of the rulers, which was intrinsically entwined with the concept of bravery and dharma. The Kalinga War happened eight years after his coronation. From his 13th inscription, we come to know that the battle was a massive one and caused the deaths of more than 100,000 soldiers and many civilians who rose up in defence; over 150,000 were deported. When he was walking through the grounds of Kalinga after his conquest, rejoicing in his victory, he was moved by the number of bodies strewn there and the wails of the kith and kin of the dead.
Buddhist conversion.
Edict 13 on the Edicts of Ashoka Rock Inscriptions reflect the great remorse the king felt after observing the destruction of Kalinga:
The edict goes on to address the even greater degree of sorrow and regret resulting from Ashoka's understanding that the friends and families of deceased would suffer greatly too.
Legend says that one day after the war was over, Ashoka ventured out to roam the city and all he could see were burnt houses and scattered corpses. This sight made him sick and he cried the famous monologue:
The lethal war with Kalinga transformed the vengeful Emperor Ashoka into a stable and peaceful emperor, and he started patronising Buddhism. Whether or not he formally converted to Buddhism is unclear, although Buddhist tradition holds that he did. According to the prominent Indologist A. L. Basham, Ashoka's personal religion became Buddhism, if not before, then certainly after the Kalinga war; however, according to Basham, the Dharma officially propagated by Ashoka was not Buddhism at all. Romila Thapar observes that modern historians debate the nature of his conversion to Buddhism in the aftermath of the Kalinga war, and that Ashoka curiously refrained from engraving any such confession anywhere.
Nevertheless, his patronage led to the expansion of Buddhism in the Mauryan empire and other kingdoms during his rule, and worldwide from about 250 BCE. Prominent in this cause were his son Mahinda (Mahendra) and daughter Sanghamitra (whose name means 'friend of the Sangha'), who established Buddhism in Ceylon (now Sri Lanka).
Archaeological evidence for Buddhism between the death of the Buddha and the time of Ashoka is scarce; after the time of Ashoka it is abundant.
Death and legacy.
Ashoka ruled for an estimated forty years. Legend states that during his cremation, his body burned for seven days and nights. After his death, the Mauryan dynasty lasted just fifty more years, even though his empire had stretched over almost all of the Indian subcontinent. Ashoka had many wives and children, but many of their names are lost to time. His supreme consort and first wife was Vidisha Mahadevi Shakyakumari Asandhimitra. Mahindra and Sanghamitra were twins borne by her, in the city of Ujjain. He entrusted to them the task of making Buddhism more popular across the known and the unknown world. Mahindra and Sanghamitra went to Sri Lanka and converted the King, the Queen and their people to Buddhism.
In his old age, he seems to have come under the spell of his youngest wife Tishyaraksha. It is said that she had got Ashoka's son Kunala, the regent in Takshashila and the heir presumptive to the throne, blinded by a wily stratagem. The official executioners spared Kunala and he became a wandering singer accompanied by his favourite wife Kanchanmala. In Pataliputra, Ashoka heard Kunala's song, and realised that Kunala's misfortune may have been a punishment for some past sin of the emperor himself. He condemned Tishyaraksha to death, restoring Kunala to the court. In the Ashokavadana, Kunala is portrayed as forgiving Tishyaraksha, having obtained enlightenment through Buddhist practice. While he urges Ashoka to forgive her as well, Ashoka does not respond with the same forgiveness. Kunala was succeeded by his son, Samprati, but his rule did not last long after Ashoka's death.
The reign of Ashoka Maurya might have disappeared into history as the ages passed by, had he not left behind records of his reign. These records are in the form of sculpted pillars and rocks inscribed with a variety of actions and teachings he wished to be published under his name. The language used for inscription was Prakrit.
In the year 185 BCE, about fifty years after Ashoka's death, the last Maurya ruler, Brihadratha, was assassinated by the commander-in-chief of the Mauryan armed forces, Pusyamitra Sunga, while reviewing a guard of honour of his forces. Pusyamitra Sunga founded the Sunga dynasty (185–78 BCE) and ruled just a fragmented part of the Mauryan Empire. Many of the northwestern territories of the Mauryan Empire (modern-day Afghanistan and Northern Pakistan) became the Indo-Greek Kingdom.
King Ashoka, the third monarch of the Indian Mauryan dynasty, has come to be regarded as one of the most exemplary rulers in world history.
Buddhist kingship.
One of the more enduring legacies of Ashoka Maurya was the model that he provided for the relationship between Buddhism and the state. Throughout Theravada Southeastern Asia, the model of rulership embodied by Ashoka replaced the notion of divine kingship that had previously dominated (in the Angkor kingdom, for instance). Under this model of 'Buddhist kingship', the king sought to legitimise his rule not through descent from a divine source, but by supporting and earning the approval of the Buddhist 'sangha'. Following Ashoka's example, kings established monasteries, funded the construction of stupas, and supported the ordination of monks in their kingdom. Many rulers also took an active role in resolving disputes over the status and regulation of the sangha, as Ashoka had in calling a conclave to settle a number of contentious issues during his reign. This development ultimately led to a close association in many Southeast Asian countries between the monarchy and the religious hierarchy, an association that can still be seen today in the state-supported Buddhism of Thailand and the traditional role of the Thai king as both a religious and secular leader. Ashoka also said that all his courtiers always governed the people in a moral manner.
According to the legends mentioned in the 2nd-century CE text 'Ashokavadana', Ashoka was not non-violent after adopting Buddhism. In one instance, a non-Buddhist in Pundravardhana drew a picture showing the Buddha bowing at the feet of Nirgrantha Jnatiputra (identified with Mahavira, the founder of Jainism). On complaint from a Buddhist devotee, Ashoka issued an order to arrest him, and subsequently, another order to kill all the Ajivikas in Pundravardhana. Around 18,000 followers of the Ajivika sect were executed as a result of this order. Sometime later, another Nirgrantha follower in Pataliputra drew a similar picture. Ashoka burnt him and his entire family alive in their house. He also announced an award of one dinara (silver coin) to anyone who brought him the head of a Nirgrantha heretic. According to 'Ashokavadana', as a result of this order, his own brother was mistaken for a heretic and killed by a cowherd. These stories of persecutions of rival sects by Ashoka appear to be a clear fabrication arising out of sectarian propaganda.
Historical sources.
Ashoka was almost forgotten by the historians of early British India, but James Prinsep contributed to the revelation of historical sources. Another important historian was British archaeologist John Hubert Marshall, who was Director-General of the Archaeological Survey of India. His main interests were Sanchi and Sarnath, in addition to Harappa and Mohenjodaro. Sir Alexander Cunningham, a British archaeologist and army engineer, and often known as the father of the Archaeological Survey of India, unveiled heritage sites like the Bharhut Stupa, Sarnath, Sanchi, and the Mahabodhi Temple. Mortimer Wheeler, a British archaeologist, also exposed Ashokan historical sources, especially at Taxila.
Information about the life and reign of Ashoka primarily comes from a relatively small number of Buddhist sources. In particular, the Sanskrit 'Ashokavadana' ('Story of Ashoka'), written in the 2nd century, and the two Pāli chronicles of Sri Lanka (the Dipavamsa and Mahavamsa) provide most of the currently known information about Ashoka. Additional information is contributed by the Edicts of Ashoka, whose authorship was finally attributed to the Ashoka of Buddhist legend after the discovery of dynastic lists that gave the name used in the edicts ('Priyadarsi' – 'He who regards everyone with affection') as a title or additional name of Ashoka Maurya. Architectural remains of his period have been found at Kumhrar, Patna, which include an 80-pillar hypostyle hall.
Edicts of Ashoka – The Edicts of Ashoka are a collection of 33 inscriptions on the Pillars of Ashoka, as well as boulders and cave walls, made by Ashoka during his reign. These inscriptions are dispersed throughout modern-day Pakistan and India, and represent the first tangible evidence of Buddhism. The edicts describe in detail the first wide expansion of Buddhism through the sponsorship of one of the most powerful kings of Indian history, offering more information about Ashoka's proselytism, moral precepts, religious precepts, and his notions of social and animal welfare.
Ashokavadana – The Ashokavadana is a 2nd-century CE text related to the legend of Ashoka. The legend was translated into Chinese by Fa Hien in 300 CE. It is essentially a Hinayana text, and its world is that of Mathura and North-west India. The emphasis of this little known text is on exploring the relationship between the king and the community of monks (the Sangha) and setting up an ideal of religious life for the laity (the common man) by telling appealing stories about religious exploits. The most startling feature is that Ashoka's conversion has nothing to do with the Kalinga war, which is not even mentioned, nor is there a word about his belonging to the Maurya dynasty. Equally surprising is the record of his use of state power to spread Buddhism in an uncompromising fashion. The legend of Veetashoka provides insights into Ashoka's character that are not available in the widely known Pali records.
Mahavamsa – The Mahavamsa ('Great Chronicle') is a historical poem written in the Pali language about the kings of Sri Lanka. It covers the period from the coming of King Vijaya of Kalinga (ancient Odisha) in 543 BCE to the reign of King Mahasena (334–361). As it often refers to the royal dynasties of India, the Mahavamsa is also valuable for historians who wish to date and relate contemporary royal dynasties in the Indian subcontinent. It is very important in dating the consecration of Ashoka.
Dipavamsa – The Dipavamsa, or 'Deepavamsa' (i.e., Chronicle of the Island, in Pali), is the oldest historical record of Sri Lanka. The chronicle is believed to be compiled from Atthakatha and other sources around the 3rd or 4th century. King Dhatusena (4th century CE) had ordered that the Dipavamsa be recited at the Mahinda festival held annually in Anuradhapura.
Perceptions.
The use of Buddhist sources in reconstructing the life of Ashoka has had a strong influence on perceptions of Ashoka, as well as the interpretations of his Edicts. Building on traditional accounts, early scholars regarded Ashoka as a primarily Buddhist monarch who underwent a conversion to Buddhism and was actively engaged in sponsoring and supporting the Buddhist monastic institution. Some scholars have tended to question this assessment. The only source of information not attributable to Buddhist sources are the Ashokan Edicts, and these do not explicitly state that Ashoka was a Buddhist. In his edicts, Ashoka expresses support for all the major religions of his time: Buddhism, Brahmanism, Jainism, and Ajivikaism, and his edicts addressed to the population at large (there are some addressed specifically to Buddhists; this is not the case for the other religions) generally focus on moral themes members of all the religions would accept.
However, there is strong evidence in the edicts alone that he was a Buddhist. In one edict he belittles rituals, and he banned Vedic animal sacrifices; these strongly suggest that he at least did not look to the Vedic tradition for guidance. Furthermore, there are many edicts expressed to Buddhists alone; in one, Ashoka declares himself to be an 'upasaka', and in another he demonstrates a close familiarity with Buddhist texts. He erected rock pillars at Buddhist holy sites, but did not do so for the sites of other religions. He also used the word 'dhamma' to refer to qualities of the heart that underlie moral action; this was an exclusively Buddhist use of the word. Finally, the ideals he promotes correspond to the first three steps of the Buddha's graduated discourse.
Interestingly, the Ashokavadana presents an alternate view of the familiar Ashoka; one in which his conversion does not have anything to do with the Kalinga war or about his descent from the Maurya dynasty. Instead, Ashoka's reason for adopting non-violence appears much more personal. The Ashokavadana shows that the main source of Ashoka's conversion and the acts of welfare that followed are rooted instead in intense personal anguish at its core, from a wellspring inside himself (not so much necessarily spurred by a specific event). It thereby illuminates Ashoka as more humanly ambitious and passionate, with both greatness and flaws. 'This' Ashoka is very different from the 'shadowy do-gooder' of later Pali chronicles.
Much of the knowledge about Ashoka comes from the several inscriptions that he had carved on pillars and rocks throughout the empire. All his inscriptions present him as compassionate and loving. In the Kalinga rock edicts, he addresses his people as his 'children' and mentions that as a father he desires their good. These inscriptions promoted Buddhist morality and encouraged nonviolence and adherence to dharma (duty or proper behaviour), and they talk of his fame and conquered lands as well as the neighbouring kingdoms holding up his might. One also gets some primary information about the Kalinga War and Ashoka's allies, plus some useful knowledge on the civil administration. The Ashoka Pillar at Sarnath is the most notable of the relics left by Ashoka. Made of sandstone, this pillar records the visit of the emperor to Sarnath, in the 3rd century BCE. It has a four-lion capital (four lions standing back to back) which was adopted as the emblem of the modern Indian republic. The lion symbolises both Ashoka's imperial rule and the kingship of the Buddha. In translating these monuments, historians learn the bulk of what is assumed to be fact about the Mauryan Empire. It is difficult to determine whether or not some actual events ever happened, but the stone etchings clearly depict how Ashoka wanted to be thought of and remembered.
Foci of debate.
Recent scholarly analysis has identified three major foci of debate regarding Ashoka: the nature of the Maurya empire; the extent and impact of Ashoka's pacifism; and what is referred to in the Inscriptions as 'dhamma' or dharma, which connotes goodness, virtue, and charity. Some historians have argued that Ashoka's pacifism undermined the 'military backbone' of the Maurya empire, while others have suggested that the extent and impact of his pacifism have been 'grossly exaggerated'. The 'dhamma' of the Edicts has been understood as concurrently a Buddhist lay ethic, a set of politico-moral ideas, a 'sort of universal religion', or as an Ashokan innovation. On the other hand, it has also been interpreted as an 'essentially political' ideology that sought to knit together a vast and diverse empire. Scholars are still attempting to analyse both the expressed and implied political ideas of the Edicts (particularly in regard to imperial vision), and to make inferences about how that vision grappled with the problems and political realities of a 'virtually subcontinental, and culturally and economically highly variegated, 3rd century BCE Indian empire'. Nonetheless, it remains clear that Ashoka's Inscriptions represent the earliest corpus of royal inscriptions in the Indian subcontinent, and therefore prove to be a very important innovation in royal practices.
Contributions.
Approach towards Religions.
According to Indian historian Romila Thapar, Ashoka emphasized respect for all religious teachers and harmonious relationships between parents and children, teachers and pupils, and employers and employees. Ashoka's religion contained gleanings from all religions, and he made a law that prohibited anyone from any act or word against any religion. He emphasized the virtues of 'Ahimsa', respect for all religious teachers, equal respect for and study of each other's scriptures, and rational faith.
Global spread of Buddhism.
As a Buddhist emperor, Ashoka believed that Buddhism is beneficial for all human beings as well as animals and plants, so he built a number of stupas, Sangharama, viharas, chaitya, and residences for Buddhist monks all over South Asia and Central Asia. According to the Ashokavadana, he ordered the construction of 84,000 stupas to house the Buddha's relics. In the Aryamanjusrimulakalpa, Ashoka takes offerings to each of these stupas, traveling in a chariot adorned with precious metals. He gave donations to viharas and mathas. He sent his only daughter Sanghamitra and son Mahindra to spread Buddhism in Sri Lanka (then known as Tamraparni). Ashoka also sent many prominent Buddhist monks (bhikshus), the Sthaviras, abroad: Madhyamik Sthavira to modern Kashmir and Afghanistan; Maharaskshit Sthavira to Syria, Persia / Iran, Egypt, Greece, Italy and Turkey; Massim Sthavira to Nepal, Bhutan, China and Mongolia; Sohn Uttar Sthavira to modern Cambodia, Laos, Burma (Suvarnabhumi being an old name for Burma and Thailand), Thailand and Vietnam; Mahadhhamarakhhita Sthavira to Maharashtra (old name Maharatthha); and Maharakhhit Sthavira and Yavandhammarakhhita Sthavira to South India.
Ashoka also invited Buddhists and non-Buddhists to religious conferences. He inspired the Buddhist monks to compose the sacred religious texts, and gave all types of help to that end. Ashoka also helped to develop viharas (intellectual hubs) such as Nalanda and Taxila, and helped to construct Sanchi and the Mahabodhi Temple. Ashoka also gave donations to non-Buddhists, though as his reign continued his even-handedness was replaced with a special inclination towards Buddhism. Ashoka helped and respected both Sramanas (Buddhist monks) and Brahmins (Vedic priests). Ashoka also helped to organise the Third Buddhist council (c. 250 BCE) at Pataliputra (today's Patna). It was conducted by the monk Moggaliputta-Tissa, who was the spiritual teacher of the Mauryan Emperor Ashoka.
It is well known that Ashoka sent 'dütas' or emissaries to convey messages or letters, written or oral (or both), to various people. The VIth Rock Edict, about 'oral orders', reveals this. It was later confirmed that it was not unusual to add oral messages to written ones, and the content of Ashoka's messages can likewise be inferred from the XIIIth Rock Edict: they were meant to spread his 'dhammavijaya', which he considered the highest victory and which he wished to propagate everywhere (including far beyond India). There is an obvious and undeniable trace of cultural contact in the adoption of the Kharosthi script, and the idea of installing inscriptions might have travelled with this script, as Achaemenid influence is seen in some of the formulations used by Ashoka in his inscriptions. This indicates that Ashoka was indeed in contact with other cultures and took an active part in mingling and spreading new cultural ideas beyond his own immediate borders.
In his edicts, Ashoka mentions some of the people living in Hellenic countries as converts to Buddhism, although no Hellenic historical record of this event remains:
It is not too far-fetched to imagine, however, that Ashoka received letters from Greek rulers and was acquainted with the Hellenistic royal orders in the same way as he perhaps knew of the inscriptions of the Achaemenid kings, given the presence of ambassadors of Hellenistic kings in India (as well as the 'dütas' sent by Ashoka himself).
The Greeks in India even seem to have played an active role in the propagation of Buddhism, as some of the emissaries of Ashoka, such as Dharmaraksita, are described in Pali sources as leading Greek (Yona) Buddhist monks, active in spreading Buddhism (the Mahavamsa, XII).
As Administrator.
Ashoka's military power was strong, but after his conversion to Buddhism he maintained friendly relations with the three major Tamil kingdoms in the South, namely the Cheras, Cholas and Pandyas, as well as with the post-Alexandrian empire, Tamraparni, and Suvarnabhumi. His edicts state that he made provisions for medical treatment of humans and animals in his own kingdom as well as in these neighbouring states. He also had wells dug and trees planted along the roads for the benefit of the common people.
Ashoka banned the slaughter and eating of the common cattle, and also imposed restrictions on fishing and fish-eating. He also abolished the royal hunting of animals and restricted the slaying of animals for food in the royal residence. Because he banned hunting, created many veterinary clinics and eliminated meat eating on many holidays, the Mauryan Empire under Ashoka has been described as 'one of the very few instances in world history of a government treating its animals as citizens who are as deserving of its protection as the human residents.'
Ashoka Chakra.
The Ashoka Chakra (the wheel of Ashoka) is a depiction of the Dharmachakra (see Dharmacakra), or Dhammachakka in Pali, the Wheel of Dharma (Sanskrit: Chakra means wheel). The wheel has 24 spokes, which represent the 12 Laws of Dependent Origination and the 12 Laws of Dependent Termination. The Ashoka Chakra has been widely inscribed on many relics of the Mauryan Emperor, most prominent among which are the Lion Capital of Sarnath and the Ashoka Pillar. The most visible use of the Ashoka Chakra today is at the centre of the National flag of the Republic of India (adopted on 22 July 1947), where it is rendered in navy blue on a white background, replacing the symbol of the Charkha (spinning wheel) of the pre-independence versions of the flag. The Ashoka Chakra can also be seen on the base of the Lion Capital of Ashoka, which has been adopted as the National Emblem of India.
The Ashoka Chakra was created by Ashoka during his reign. Chakra is a Sanskrit word which also means 'cycle' or 'self-repeating process.' The process it signifies is the cycle of time, as in how the world changes with time.
A few days before India became independent in August 1947, the specially formed Constituent Assembly decided that the flag of India must be acceptable to all parties and communities. A flag with three colours, saffron, white and green, with the Ashoka Chakra was selected.
Pillars of Ashoka (Ashokstambha).
The pillars of Ashoka are a series of columns dispersed throughout the northern Indian subcontinent, erected by Ashoka during his reign in the 3rd century BCE. Originally there must have been many pillars of Ashoka, although only ten with inscriptions still survive. Averaging between forty and fifty feet in height, and weighing up to fifty tons each, all the pillars were quarried at Chunar, just south of Varanasi, and dragged, sometimes hundreds of miles, to where they were erected. The first Pillar of Ashoka was found in the early 17th century by Thomas Coryat in the ruins of ancient Delhi. The wheel represents the sun, time, and Buddhist law, while the swastika stands for the cosmic dance around a fixed centre and guards against evil.
There is no evidence of a swastika, or manji, on the pillars.
Lion Capital of Ashoka (Ashokmudra).
The Lion capital of Ashoka is a sculpture of four 'Indian lions' standing back to back. It was originally placed atop the Ashoka pillar at Sarnath, now in the state of Uttar Pradesh, India. The pillar, sometimes called the Ashoka Column is still in its original location, but the Lion Capital is now in the Sarnath Museum. This Lion Capital of Ashoka from Sarnath has been adopted as the National Emblem of India and the wheel 'Ashoka Chakra' from its base was placed onto the center of the National Flag of India.
The capital contains four lions (Indian / Asiatic Lions), standing back to back, mounted on an abacus, with a frieze carrying sculptures in high relief of an elephant, a galloping horse, a bull, and a lion, separated by intervening spoked chariot-wheels over a bell-shaped lotus. Carved out of a single block of polished sandstone, the capital was believed to be crowned by a 'Wheel of Dharma' (Dharmachakra popularly known in India as the 'Ashoka Chakra').
The Ashoka Lion capital or the Sarnath lion capital is also known as the national symbol of India. The Sarnath pillar bears one of the Edicts of Ashoka, an inscription against division within the Buddhist community, which reads, 'No one shall cause division in the order of monks.' The Sarnath pillar is a column surmounted by a capital, which consists of a canopy representing an inverted bell-shaped lotus flower, a short cylindrical abacus with four 24-spoked Dharma wheels with four animals (an elephant, a bull, a horse, a lion).
The four animals in the Sarnath capital are believed to symbolise different steps of Lord Buddha's life.
Besides the religious interpretations, there are some non-religious interpretations also about the symbolism of the Ashoka capital pillar at Sarnath. According to them, the four lions symbolise Ashoka's rule over the four directions, the wheels as symbols of his enlightened rule (Chakravartin) and the four animals as symbols of four adjoining territories of India.
Constructions credited to Ashoka.
The British-era restoration was carried out under the guidance of Ven. Weligama Sri Sumangala.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1241'>
American (word)
The meaning of the word 'American' in the English language varies according to the historical, geographical, and political context in which it is used. 'American' is derived from 'America', a term originally denoting all of the New World (also called 'the Americas'). In some expressions, it retains this Pan-American sense, but its usage has evolved over time and, for various historical reasons, the word came to denote people or things specifically from the United States of America.
In modern English, 'Americans' generally refers to residents of the United States; among native English speakers this usage is almost universal, with any other use of the term requiring specification. However, this default use has been the source of controversy, particularly among Latin Americans, who feel that using the term solely for the United States misappropriates it.
The word can be used as both a noun and an adjective. In adjectival use, it is generally understood to mean 'of or relating to the United States'; for example, 'Elvis Presley was an American singer' or 'the American President gave a speech today'. In noun form, it generally means U.S. citizen or national. The noun is rarely used in American English to refer to people not connected to the United States. When used with a grammatical qualifier, the adjective 'American' can mean 'of or relating to the Americas', as in Latin American or Indigenous American. Less frequently, the adjective can take this meaning without a qualifier, as in 'American Spanish dialects and pronunciation differ by country', or the name of the Organization of American States. A third use of the term pertains specifically to the indigenous peoples of the Americas, for instance, 'In the 16th century, many Americans died from imported diseases during the European conquest'.
Other languages.
English, French, German, Italian, Japanese, Hebrew, Arabic, Portuguese, and Russian speakers may use cognates of 'American' to refer to inhabitants of the Americas or to U.S. nationals. They generally have other terms specific to U.S. nationals, such as the German ', French ', Japanese , Arabic ' () (as opposed to ' []), and Italian '. These specific terms may be less common than the term 'American'.
In French, ', ' or ', from ' ('United States of America'), is a rarely used word that distinguishes U.S. things and persons from the adjective ', which denotes persons and things from the United States, but may also refer to 'the Americas'.
Likewise, German's use of ' and ' observe said cultural distinction, solely denoting U.S. things and people. Note that these are 'politically correct' terms and that in normal parlance, the adjective 'American' and its direct cognates are almost always used unless the context does not render the nationality of the person clear. For this reason, the style manual of the 'Neue Zürcher Zeitung' (one of the leading German-language newspapers) dismisses the term ' as both ′unnecessary′ and ′artificial′ and recommends replacing it with 'amerikanisch'. The respective guidelines of the foreign ministries of Austria, Germany and Switzerland all prescribe 'Amerikaner' and 'amerikanisch' for official usage, making no mention of ' or '.
Portuguese has ', denoting both a person or thing from the Americas and a U.S. national. For referring specifically to a U.S. national and things, the words used are ' (also spelled ') (United States person), from ', and ' ('Yankee'), but the term most often used is ', even though it could, as with its Spanish equivalent, apply to Canadians, Mexicans, etc. as well.
In Spanish, ' denotes geographic and cultural origin in the New World, as well as (infrequently) a U.S. citizen; the more common term is ' ('United States person'), which derives from ' ('United States of America'). The Spanish term ' ('North American'), is frequently used to refer things and persons from the United States, but this term can also denote people and things from Canada, Mexico, and the rest of North America.
In other languages, however, there is no possibility of confusion. For example, the Chinese word for 'U.S. national' is ' (), derived from a word for the United States, ', where ' is an abbreviation for ' ('America [the continent]') and ' is 'country'. The name for the American continent is ', from ' plus ' ('continent'). Thus, a ' is an American in the generic sense, and a ' is an American in the U.S. sense.
Korean and Vietnamese also use unambiguous terms, with Korean having ' () for the country versus ' () for the continent, and Vietnamese having ' for the country versus ' for the continent. Japanese has such terms as well (' [ versus ' []), but they are found more in newspaper headlines than in speech, where ' predominates.
In Swahili, ' means specifically the United States, and ' is a U.S. national, whereas the international form ' refers to the continent, and ' would be an inhabitant thereof. Likewise, the Esperanto word ' refers to the continent. For the country there is the term '. Thus, a citizen of the United States is an ', whereas an ' is an inhabitant of the Americas.
History.
The name 'America' was derived by Martin Waldseemüller from 'Americus Vespucius', the Latinized version of Amerigo Vespucci, the name of the Italian merchant and cartographer who explored South America's east coast and the Caribbean Sea in the early 16th century. Vespucci's published letters were the basis of Waldseemüller's 1507 map, which is the first recorded usage of 'America'. The adjective 'American' subsequently denoted the New World's peoples and things.
16th-century European usage of 'American' denoted the native inhabitants of the New World. The earliest recorded use of this term in English is in Thomas Hacket's 1568 translation of André Thévet's book 'France Antarctique'; Thévet himself had referred to the natives as 'Ameriques'. In the following century, the term was extended to European settlers and their descendants in the Americas. The earliest recorded use of this sense in English dates to 1648, in Thomas Gage's 'The English-American: A New Survey of the West Indies'. In English, 'American' was used especially for people in British America, and came to be applied to citizens of the United States when the country was formed. The Declaration of Independence refers to '[the] unanimous Declaration of the thirteen united States of America' adopted by the 'Representatives of the united States of America' on July 4, 1776. The official name of the country was established on November 15, 1777, when the Second Continental Congress adopted the Articles of Confederation, the first of which says, 'The Stile of this Confederacy shall be 'The United States of America'. The Articles further state:
Common short forms and abbreviations are the 'United States', the 'U.S.', the 'U.S.A.', and 'America'; colloquial versions include the 'U.S. of A.' and 'the States'. The term 'Columbia' (from the Columbus surname) was a popular name for the U.S. and for the entire geographic Americas; its usage is present today only in the District of Columbia's name. Moreover, the feminine personification of Columbia appears in some official documents, including editions of the U.S. dollar.
In the 'Federalist Papers', Alexander Hamilton and James Madison used 'American' with two different meanings: political and geographic; 'the American republic' in Federalist Paper 51 and in Federalist Paper 70, and, in Federalist Paper 24, Hamilton used 'American' to denote the lands beyond the U.S.'s political borders.
United States President George Washington's 1796 Farewell Address said, 'The name of American, which belongs to you in your national capacity, must always exalt the just pride of patriotism more than any appellation.'
Early official U.S. documents show inconsistent usage; the 1778 Treaty of Alliance with France used 'the United States of North America' in the first sentence, then 'the said United States' afterwards; 'the United States of America' and 'the United States of North America' derive from 'the United Colonies of America' and 'the United Colonies of North America'. The Treaty of Peace and Amity of September 5, 1795 between the United States and the Barbary States contains the usages 'the United States of North America', 'citizens of the United States', and 'American Citizens'.
Originally, the name 'the United States' was plural—'the United States are'—a usage found in the U.S. Constitution's Thirteenth Amendment (1865), but its current usage is singular—'the United States is'. The plural was set in the term 'these United States'.
Semantic divergence among Anglophones did not affect the Spanish colonies. In 1801, the document titled 'Letter to American Spaniards'—published in French (1799), in Spanish (1801), and in English (1808)—might have influenced Venezuela's Act of Independence and its 1811 constitution.
The Latter-day Saints' Articles of Faith refer to the American continent as where they are to build Zion. The Old Catholic Encyclopedia's usage of 'America' is as 'the Western Continent or the New World'. It discusses American republics, ranging from the U.S. to
Usage at the United Nations.
Use of the term 'American' for U.S. nationals is common at the United Nations, and financial markets in the United States are referred to as 'American financial markets'.
'American Samoa' is a recognized territorial name at the United Nations.
Cultural views.
Spain and Latin America.
The use of 'American' as a national demonym for U.S. nationals is challenged, primarily by Latin Americans. Spanish speakers in Spain and Latin America use the term ' to refer to people and things from the United States (from '), while ' refers to the continent as a whole. Through the 1992 edition the ', published by the Real Academia Española, did not include the United States definition in the entry for '; this was added in the 2001 edition. The Real Academia Española specifically advises against using ' exclusively for U.S. nationals:
Canada.
Modern Canadians typically refer to people from the United States as 'Americans', though they seldom refer to the United States as 'America'; they use the terms 'the United States', 'the U.S.', or (informally) 'the States' instead. Canadians rarely apply the term 'American' to themselves; some Canadians resent being referred to as Americans, whether because of mistaken assumptions that they are U.S. citizens or because of others' inability, particularly of those overseas, to distinguish Canadian from American accents. Some Canadians have protested the use of 'American' as a national demonym. People of U.S. ethnic origin in Canada are categorized as 'Other North American origins' by Statistics Canada for purposes of census counts (as opposed to 'Canadian').
Portugal and Brazil.
Generally, ' denotes 'U.S. citizen' in Portugal. Usage of ' to exclusively denote people and things of the U.S. is discouraged by the Lisbon Academy of Sciences, because the specific word ' (also ') clearly denotes a person from the United States. The term currently used by the Portuguese press is '.
In Brazil, the term ' is used to address both that which pertains to the American continent and, in current speech, that which pertains to the U.S.; the particular meaning is deduced from context. Alternatively, the term ' ('North American') is also used in more informal contexts, while ' (of the U.S.) is the preferred form in academia. Use of the three terms is common in schools, government, and media. The term ' is used almost exclusively for the continent, and the U.S. is called ' ('United States') or ' ('United States of America'), often abbreviated '.
The Getting Through Customs website advises business travelers not to use 'in America' as a U.S. reference when conducting business in Brazil.
In other contexts.
'American' in the 1994 'Associated Press Stylebook' was defined as, 'An acceptable description for a resident of the United States. It also may be applied to any resident or citizen of nations in North or South America.' Elsewhere, the 'AP Stylebook' indicates that 'United States' must 'be spelled out when used as a noun. Use U.S. (no space) only as an adjective.'
The entry for 'America' in 'The New York Times Manual of Style and Usage' from 1999 reads:
Media releases from the Pope and Holy See frequently use 'America' to refer to the United States, and 'American' to denote something or someone from the United States.
International law.
At least one international law uses 'U.S. citizen' in defining a citizen of the United States rather than 'American citizen'; for example, the English version of the North American Free Trade Agreement includes:
Many international treaties use the terms 'American' and 'American citizen':
U.S. commercial regulation.
Products that are labeled, advertised, and marketed in the U.S. as 'Made in the USA' must be, as set by the Federal Trade Commission (FTC), 'all or virtually all made in the U.S.' The FTC, to prevent deception of customers and unfair competition, considers an unqualified claim of 'American Made' to expressly claim exclusive manufacture in the U.S.: 'The FTC Act gives the Commission the power to bring law enforcement actions against false or misleading claims that a product is of U.S. origin.'
Alternatives.
There are a number of alternatives to the demonym 'American' as a citizen of the United States that do not simultaneously mean any inhabitant of the Americas. One uncommon alternative is 'Usonian', which usually describes a certain style of residential architecture designed by Frank Lloyd Wright. Other alternatives have also surfaced, but most have fallen into disuse and obscurity. 'Merriam-Webster's Dictionary of English Usage' says:
Nevertheless, no alternative to 'American' is common.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1242'>
Ada (programming language)
Ada is a structured, statically typed, imperative, wide-spectrum, and object-oriented high-level computer programming language, extended from Pascal and other languages. It has built-in language support for explicit concurrency, offering tasks, synchronous message passing, protected objects, and non-determinism. Ada is an international standard; the current version (known as Ada 2012) is defined by ISO/IEC 8652:2012.
Ada was originally designed by a team led by Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede the hundreds of programming languages then used by the DoD. Ada was named after Ada Lovelace (1815–1852), who is credited as being the first computer programmer.
Features.
Ada was originally targeted at embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP).
Notable features of Ada include: strong typing, modularity mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch.
The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as 'or else' and 'and then') to symbols (such as '||' and '&&'). Ada uses the basic arithmetical operators '+', '-', '*', and '/', but avoids using other symbols. Code blocks are delimited by words such as 'declare', 'begin', and 'end', where the 'end' (in most cases) is followed by the identifier of the block it closes (e.g. 'if .. end if', 'loop .. end loop'). In the case of conditional blocks this avoids a 'dangling else' that could pair with the wrong nested if-statement in other languages like C or Java.
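As a brief sketch (the procedure name and values are hypothetical), the named block closings and English keyword operators look like this:

```ada
-- Minimal illustration of Ada's keyword operators and named 'end' closings.
procedure Check (X : Integer) is
begin
   --  'and then' short-circuits: the division is never evaluated when X <= 0
   if X > 0 and then 100 / X > 5 then
      null;       --  explicit no-operation statement
   end if;        --  'end if' names the construct it closes
end Check;
```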
Ada is designed for development of very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts.
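A sketch of such a package specification (names are hypothetical) shows the interface that clients and the compiler can check before any body exists:

```ada
--  A package specification: compilable and checkable on its own,
--  before the package body (implementation) is written.
package Stacks is
   type Stack is private;
   procedure Push (S : in out Stack; Item : Integer);
   function Top (S : Stack) return Integer;
private
   type Int_Array is array (1 .. 100) of Integer;
   type Stack is record
      Data  : Int_Array;
      Count : Natural := 0;   --  empty by default
   end record;
end Stacks;
```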
A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run-time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detection of many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either during compile-time, or otherwise during run-time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error.
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is widely used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, railways, banking, military and space technology.
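For instance, a range violation on a constrained subtype raises 'Constraint_Error' at run time; the sketch below (hypothetical names) assumes a compiler that performs the default checks:

```ada
with Ada.Text_IO;
procedure Range_Demo is
   subtype Percent is Integer range 0 .. 100;
   P : Percent := 0;
   N : Integer := 101;
begin
   P := N;   --  out of range: raises Constraint_Error at run time
exception
   when Constraint_Error =>
      Ada.Text_IO.Put_Line ("range violation caught");
end Range_Demo;
```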
Ada's dynamic memory management is high-level and type-safe. Ada does not have generic or untyped pointers; nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must take place through explicitly declared 'access types'.
Each access type has an associated 'storage pool' that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools.
Also, the language provides 'accessibility checks', both at compile time and at run time, that ensure that an 'access value' cannot outlive the type of the object it points to.
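A sketch of an explicitly declared access type (all type and variable names hypothetical):

```ada
--  Dynamic allocation in Ada goes through a named access type,
--  never a generic or untyped pointer.
procedure List_Demo is
   type Node;                            --  incomplete declaration for self-reference
   type Node_Access is access Node;      --  the explicitly declared access type
   type Node is record
      Value : Integer;
      Next  : Node_Access;
   end record;
   Head : Node_Access := new Node'(Value => 1, Next => null);
begin
   Head.Next := new Node'(Value => 2, Next => null);
end List_Demo;
```

Each such access type draws from a storage pool; by default this is the standard pool, but a user-defined pool can be attached to the type instead.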
Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada does support a limited form of region-based memory management; also, creative use of storage pools can provide for a limited form of automatic garbage collection, since destroying a storage pool also destroys all the objects in the pool.
Ada was designed to resemble the English language in its syntax for comments: a double-dash ('--'), resembling an em dash, denotes comment text. Comments stop at end of line, so there is no danger of unclosed comments accidentally voiding whole sections of source code. Comments can be nested: prefixing each line (or column) with '--' will skip all that code, while being clearly denoted as a column of repeated '--' down the page. There is no limit to the nesting of comments, thereby allowing prior code, with commented-out sections, to be commented-out as even larger sections. All Unicode characters are allowed in comments, such as for symbolic formulas (E[0]=m×c²). To the compiler, the double-dash is treated as end-of-line, allowing continued parsing of the language as a context-free grammar.
The semicolon (';') is a statement terminator, and the null or no-operation statement is 'null;'. A single ';' without a statement to terminate is not allowed.
Unlike most ISO standards, the Ada language definition (known as the 'Ada Reference Manual' or 'ARM', or sometimes the 'Language Reference Manual' or 'LRM') is free content. Thus, it is a common reference for Ada programmers and not just programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written.
One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio.
History.
In the 1970s, the US Department of Defense (DoD) was concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's requirements. The result was Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996.
The HOLWG working group crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications.
Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (CII Honeywell Bull, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at CII Honeywell Bull, was chosen and given the name Ada—after Augusta Ada, Countess of Lovelace. This proposal was influenced by the programming language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual
was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and
given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, C. A. R. Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook.
Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not just defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain, Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow and tools primitive. Compiler vendors expended most of their efforts in passing the massive, government-required 'ACVC' validation suite for language conformance testing, itself another novel feature of the Ada language effort.
The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. A number of commercial companies began offering Ada compilers and associated development tools, including Alsys, Telesoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, TLD Systems, and others.
In 1987, the US Department of Defense began to require the use of Ada (the 'Ada mandate') for every software project where new code made up more than 30% of the result, though exceptions to this rule were often granted.
By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to full exploitation of Ada's abilities, including a tasking model that was different from what most real-time programmers were used to.
The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace COTS (commercial off-the-shelf) technology. Similar requirements existed in other NATO countries.
Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g. avionics and air traffic control, commercial rockets (e.g. Ariane 4 and 5), satellites and other space systems, railway transport and banking.
For example, the fly-by-wire system software in the Boeing 777 was written in Ada. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g. the UK’s next-generation Interim Future Area Control Tools Support (iFACTS) air traffic control system is designed and implemented using SPARK Ada.
It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and the metro suburban trains in Paris, London, Hong Kong and New York City.
Standardization.
The language became an ANSI standard in 1983 and, without any further changes, became
an ISO standard in 1987 (ISO 8652:1987). This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes also referred to as Ada 87, from the date of its adoption by ISO.
Ada 95, the joint ISO/ANSI standard, was published in February 1995, making Ada 95 the first ISO standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection.
Work has continued on improving and updating the technical content of the Ada programming language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment was published on March 9, 2007. At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada programming language and the submission of the reference manual to the International Organization for Standardization (ISO) for approval. ISO/IEC 8652:2012 was published in December 2012.
Other related standards include ISO 8651-3:1988 'Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada'.
Language constructs.
Ada is an ALGOL-like programming language featuring control structures with reserved words such as 'if', 'then', 'else', 'while', 'for', and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, enumerations. Such constructs were in part inherited or inspired from Pascal.
'Hello, world!' in Ada.
A common example of a language's syntax is the Hello world program:
with Ada.Text_IO; use Ada.Text_IO;
procedure Hello is
begin
Put_Line ("Hello, world!");
end Hello;
This program can be compiled with the freely available open-source compiler GNAT by executing
gnatmake hello.adb
Data types.
Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e. range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted.
Special types provided by the language are task types and protected types.
For example, a date might be represented as:
type Day_type is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);
type Date is
record
Day : Day_type;
Month : Month_type;
Year : Year_type;
end record;
Types can be refined by declaring subtypes:
subtype Working_Hours is Hours range 0 .. 12; -- at most 12 Hours to work a day
subtype Working_Day is Weekday range Monday .. Friday; -- Days to work
Work_Load: constant array(Working_Day) of Working_Hours -- implicit type declaration
:= (Friday => 6, Monday => 4, others => 10); -- lookup table for working hours with initialization
Types can have modifiers such as 'limited, abstract, private' etc. Private types can only be accessed and limited types can only be modified or copied within the scope of the package that defines them.
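A private type can be sketched as follows (the package Counters and its operations are invented for illustration); clients may declare and pass Counter values, but only the package body can touch the record field:

```ada
package Counters is
   type Counter is private;                 -- partial view: no components visible
   procedure Increment (C : in out Counter);
   function Value (C : Counter) return Natural;
private
   type Counter is record                   -- full view, usable only inside the package
      Count : Natural := 0;
   end record;
end Counters;

package body Counters is
   procedure Increment (C : in out Counter) is
   begin
      C.Count := C.Count + 1;
   end Increment;

   function Value (C : Counter) return Natural is
   begin
      return C.Count;
   end Value;
end Counters;
```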
Ada 95 adds additional features for object-oriented extension of types.
Control structures.
Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exit are supported, so the also-supported 'go to' statement is seldom needed.
-- while a is not equal to b, loop.
while a /= b loop
Ada.Text_IO.Put_Line ("Waiting");
end loop;
if a > b then
Ada.Text_IO.Put_Line ("Condition met");
else
Ada.Text_IO.Put_Line ("Condition not met");
end if;
for i in 1 .. 10 loop
Ada.Text_IO.Put ("Iteration: ");
Ada.Text_IO.Put (Integer'Image (i));
Ada.Text_IO.New_Line;
end loop;
loop
a := a + 1;
exit when a = 10;
end loop;
case i is
when 0 => Ada.Text_IO.Put ("zero");
when 1 => Ada.Text_IO.Put ("one");
when 2 => Ada.Text_IO.Put ("two");
-- case statements have to cover all possible cases:
when others => Ada.Text_IO.Put ("none of the above");
end case;
for aWeekday in Weekday'Range loop -- loop over an enumeration
Put_Line ( Weekday'Image(aWeekday) ); -- output string representation of an enumeration
if aWeekday in Working_Day then -- check of a subtype of an enumeration
Put_Line ( " to work for " &
Working_Hours'Image (Work_Load(aWeekday)) ); -- access into a lookup table
end if;
end loop;
Packages, procedures and functions.
Among the parts of an Ada program are packages, procedures and functions.
Example:
Package specification (example.ads)
package Example is
type Number is range 1 .. 11;
procedure Print_and_Increment (j: in out Number);
end Example;
Package body (example.adb)
with Ada.Text_IO;
package body Example is
i : Number := Number'First;
procedure Print_and_Increment (j: in out Number) is
function Next (k: in Number) return Number is
begin
return k + 1;
end Next;
begin
Ada.Text_IO.Put_Line ( "The total is: " & Number'Image(j) );
j := Next (j);
end Print_and_Increment;
-- package initialization executed when the package is elaborated
begin
while i < Number'Last loop
Print_and_Increment (i);
end loop;
end Example;
This program can be compiled, e.g., with the freely available open-source compiler GNAT, by executing
gnatmake -z example.adb
Packages, procedures and functions can nest to any depth and each can also be the logical outermost block.
Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.
Concurrency.
Ada has language support for task-based concurrency. The fundamental concurrent unit in Ada is a 'task', which is a built-in limited type. Tasks are specified in two parts: the task declaration defines the task interface (similar to a type declaration), while the task body specifies the implementation of the task.
Depending on the implementation, Ada tasks are either mapped to operating system tasks or processes, or are scheduled internally by the Ada runtime.
Tasks can have entries for synchronisation (a form of synchronous message passing). Task entries are declared in the task specification. Each task entry can have one or more 'accept' statements within the task body. If the control flow of the task reaches an accept statement, the task is blocked until the corresponding entry is called by another task (similarly, a calling task is blocked until the called task reaches the corresponding accept statement). Task entries can have parameters similar to procedures, allowing tasks to synchronously exchange data. In conjunction with 'select' statements it is possible to define 'guards' on accept statements (similar to Dijkstra's guarded commands).
Ada also offers 'protected objects' for mutual exclusion. Protected objects are a monitor-like construct, but use guards instead of conditional variables for signaling (similar to conditional critical regions). Protected objects combine the data encapsulation and safe mutual exclusion from monitors, and entry guards from conditional critical regions. The main advantage over classical monitors is that conditional variables are not required for signaling, avoiding potential deadlocks due to incorrect locking semantics. Like tasks, the protected object is a built-in limited type, and it also has a declaration part and a body.
A protected object consists of encapsulated private data (which can only be accessed from within the protected object), and procedures, functions and entries which are guaranteed to be mutually exclusive (with the only exception of functions, which are required to be side effect free and can therefore run concurrently with other functions). A task calling a protected object is blocked if another task is currently executing inside the same protected object, and released when this other task leaves the protected object. Blocked tasks are queued on the protected object ordered by time of arrival.
Protected object entries are similar to procedures, but additionally have 'guards'. If a guard evaluates to false, a calling task is blocked and added to the queue of that entry; now another task can be admitted to the protected object, as no task is currently executing inside the protected object. Guards are re-evaluated whenever a task leaves the protected object, as this is the only time when the evaluation of guards can have changed.
Calls to entries can be 'requeued' to other entries with the same signature. A task that is requeued is blocked and added to the queue of the target entry; this means that the protected object is released and allows admission of another task.
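Requeuing can be sketched as a protected-object fragment (the Dispenser object and its entries are hypothetical); note that the target entry has the same parameter profile, as requeue requires:

```ada
protected Dispenser is
   entry Get_Item (X : out Natural);
   entry Wait_For_Refill (X : out Natural);  -- same profile as Get_Item
   procedure Refill (Amount : in Natural);
private
   Stock : Natural := 0;
end Dispenser;

protected body Dispenser is
   entry Get_Item (X : out Natural) when True is
   begin
      if Stock = 0 then
         requeue Wait_For_Refill;  -- caller re-blocks on this entry's queue;
                                   -- the protected object is released meanwhile
      else
         Stock := Stock - 1;
         X := Stock;
      end if;
   end Get_Item;

   entry Wait_For_Refill (X : out Natural) when Stock > 0 is
   begin
      Stock := Stock - 1;
      X := Stock;
   end Wait_For_Refill;

   procedure Refill (Amount : in Natural) is
   begin
      Stock := Stock + Amount;  -- on exit, entry guards are re-evaluated
   end Refill;
end Dispenser;
```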
The 'select' statement in Ada can be used to implement non-blocking entry calls and accepts, non-deterministic selection of entries (also with guards), time-outs and aborts.
The following example illustrates some concepts of concurrent programming in Ada.
with Ada.Text_IO; use Ada.Text_IO;
procedure Traffic is
type Airplane_ID is range 1 .. 10; -- 10 airplanes
task type Airplane (ID: Airplane_ID); -- task representing airplanes, with ID as initialisation parameter
type Airplane_Access is access Airplane; -- reference type to Airplane
protected type Runway is -- the shared runway (protected to allow concurrent access)
entry Assign_Aircraft (ID: Airplane_ID); -- all entries are guaranteed mutually exclusive
entry Cleared_Runway (ID: Airplane_ID);
entry Wait_For_Clear;
private
Clear: Boolean := True; -- protected private data - generally more than just a flag..
end Runway;
type Runway_Access is access all Runway;
-- the air traffic controller task takes requests for takeoff and landing
task type Controller (My_Runway: Runway_Access) is
-- task entries for synchronous message passing
entry Request_Takeoff (ID: in Airplane_ID; Takeoff: out Runway_Access);
entry Request_Approach(ID: in Airplane_ID; Approach: out Runway_Access);
end Controller;
-- allocation of instances
Runway1 : aliased Runway; -- instantiate a runway
Controller1: Controller (Runway1'Access); -- and a controller to manage it
------ the implementations of the above types ------
protected body Runway is
entry Assign_Aircraft (ID: Airplane_ID)
when Clear is -- the entry guard - calling tasks are blocked until the condition is true
begin
Clear := False;
Put_Line (Airplane_ID'Image (ID) & " on runway ");
end;
entry Cleared_Runway (ID: Airplane_ID)
when not Clear is
begin
Clear := True;
Put_Line (Airplane_ID'Image (ID) & " cleared runway ");
end;
entry Wait_For_Clear
when Clear is
begin
null; -- no need to do anything here - a task can only enter if 'Clear' is true
end;
end Runway;
task body Controller is
begin
loop
My_Runway.Wait_For_Clear; -- wait until runway is available (blocking call)
select -- wait for two types of requests (whichever is runnable first)
when Request_Approach'count = 0 => -- guard statement - only accept if there are no tasks queuing on Request_Approach
accept Request_Takeoff (ID: in Airplane_ID; Takeoff: out Runway_Access)
do -- start of synchronized part
My_Runway.Assign_Aircraft (ID); -- reserve runway (potentially blocking call if protected object busy or entry guard false)
Takeoff := My_Runway; -- assign 'out' parameter value to tell airplane which runway
end Request_Takeoff; -- end of the synchronised part
or
accept Request_Approach (ID: in Airplane_ID; Approach: out Runway_Access) do
My_Runway.Assign_Aircraft (ID);
Approach := My_Runway;
end Request_Approach;
or -- terminate if no tasks left who could call
terminate;
end select;
end loop;
end;
task body Airplane is
Rwy : Runway_Access;
begin
Controller1.Request_Takeoff (ID, Rwy); -- This call blocks until Controller task accepts and completes the accept block
Put_Line (Airplane_ID'Image (ID) & " taking off..");
delay 2.0;
Rwy.Cleared_Runway (ID); -- call will not block as 'Clear' in Rwy is now false and no other tasks should be inside protected object
delay 5.0; -- fly around a bit..
loop
select -- try to request a runway
Controller1.Request_Approach (ID, Rwy); -- this is a blocking call - will run on controller reaching accept block and return on completion
exit; -- if call returned we're clear for landing - leave select block and proceed..
or
delay 3.0; -- timeout - if no answer in 3 seconds, do something else (everything in following block)
Put_Line (Airplane_ID'Image (ID) & " in holding pattern"); -- simply print a message
end select;
end loop;
delay 4.0; -- do landing approach..
Put_Line (Airplane_ID'Image (ID) & " touched down!");
Rwy.Cleared_Runway (ID); -- notify runway that we're done here.
end;
New_Airplane: Airplane_Access;
begin
for I in Airplane_ID'Range loop -- create a few airplane tasks
New_Airplane := new Airplane (I); -- will start running directly after creation
delay 4.0;
end loop;
end Traffic;
Pragmas.
A pragma is a compiler directive that conveys information to the compiler to allow specific manipulation of compiled output. Certain pragmas are built into the language while others are implementation-specific.
Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code in lieu of a function call (as C/C++ does with inline functions).
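Two language-defined pragmas can illustrate this (the package name Speedups is invented for the example): pragma Inline requests in-line expansion of calls, and pragma Suppress disables a class of run-time checks.

```ada
package Speedups is
   function Add (A, B : Integer) return Integer;
   pragma Inline (Add);              -- request in-line expansion of calls
end Speedups;

package body Speedups is
   pragma Suppress (Range_Check);    -- drop run-time range checks
                                     -- throughout this package body
   function Add (A, B : Integer) return Integer is
   begin
      return A + B;
   end Add;
end Speedups;
```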
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1247'>
Alfonso Cuarón
Alfonso Cuarón Orozco (; born November 28, 1961) is a Mexican film director, screenwriter, producer and editor best known for his films 'A Little Princess' (1995), 'Y Tu Mamá También' (2001), 'Harry Potter and the Prisoner of Azkaban' (2004), 'Children of Men' (2006), and 'Gravity' (2013). His fantasy adventure series 'Believe' is currently being broadcast on NBC.
Most of his work has been praised by both audience and critics, and he has been nominated for six Academy Awards including Best Original Screenplay for 'Y Tu Mamá También', Best Adapted Screenplay and Best Film Editing for 'Children of Men', and Best Picture for 'Gravity,' winning Best Director and Best Film Editing for 'Gravity'. For the same film, he also won the Golden Globe Award for Best Director and the BAFTA Awards for Best British Film and Best Direction. He also won a BAFTA Award for Best Film not in the English Language as one of the producers of Guillermo del Toro's 'Pan's Labyrinth'.
Cuarón's brother Carlos, as well as his son Jonás, are writers and directors as well and both acted as co-writers in some of his works. He is also friends with fellow Mexican directors Guillermo del Toro and Alejandro González Iñárritu, collectively known as 'The Three Amigos of Cinema.'
Early life.
Alfonso Cuarón was born in Mexico City, and is the son of Alfredo Cuarón, a nuclear physicist who worked for the United Nations' International Atomic Energy Agency for many years. He has two brothers, Carlos, also a filmmaker, and Alfredo, a conservation biologist.
Cuarón studied Philosophy at the National Autonomous University of Mexico (UNAM) and filmmaking at CUEC (Centro Universitario de Estudios Cinematográficos), a school within the same University. There, he met director Carlos Marcovich and cinematographer Emmanuel Lubezki, and they made what would be his first short film, 'Vengeance Is Mine'.
Career.
Cuarón began working in television in Mexico, first as a technician and then as a director. His television work led to assignments as an assistant director for several Latin American film productions, including 'Romero', and in 1991, he landed his first big-screen directorial assignment. On January 12, 2014, Alfonso accepted the Golden Globe Award in the category Best Director for Gravity (The 71st Annual Golden Globe Awards, 2014). He also won two Oscars for Best Film Editing and Best Director.
'Sólo con tu pareja'.
'Sólo con tu pareja' was a sex comedy about a womanizing businessman (played by Daniel Giménez Cacho) who, after spurning an attractive nurse, is fooled into believing he's contracted AIDS. In addition to writing, producing and directing, Cuarón co-edited the film with Luis Patlán. It is somewhat unusual for directors to be credited co-editors, although the Coen Brothers and Robert Rodriguez have both directed and edited nearly all of their films. Cuarón continued this close involvement in editing on several of his later films.
The film, which also starred cabaret singer Astrid Hadad and model/actress Claudia Ramírez (with whom Cuarón was linked between 1989 and 1993), was a big hit in Mexico. After this success, director Sydney Pollack hired Cuarón to direct an episode of 'Fallen Angels', a series of neo-noir stories produced for the Showtime premium cable network in 1993; other directors who worked on the series included Steven Soderbergh, Jonathan Kaplan, Peter Bogdanovich and Tom Hanks.
International success.
In 1995, Cuarón released his first feature film produced in the United States, 'A Little Princess', an adaptation of Frances Hodgson Burnett's classic novel. Cuarón's next feature was also a literary adaptation, a modernized version of Charles Dickens's 'Great Expectations' starring Ethan Hawke, Gwyneth Paltrow and Robert De Niro.
Cuarón's next project found him returning to Mexico with a Spanish-speaking cast to film 'Y Tu Mamá También', starring Gael García Bernal, Diego Luna and Maribel Verdú. It was a provocative and controversial road comedy about two sexually obsessed teenagers who take an extended road trip with an attractive married woman in her late twenties. The film's open portrayal of sexuality and frequent rude humor, as well as the politically and socially relevant asides, made the film an international hit and a major success with critics. Cuarón shared an Academy Award nomination for Best Original Screenplay with co-writer and brother Carlos Cuarón.
In 2003, Cuarón directed the third film in the successful 'Harry Potter' series, 'Harry Potter and the Prisoner of Azkaban'. Cuarón faced criticism from some of the more purist 'Harry Potter' fans for his approach to the film. At the time of the movie's release, however, author J. K. Rowling, who had seen and loved Cuarón's film 'Y Tu Mamá También', said that it was her personal favorite from the series so far. Critically, the film was also better received than the first two installments, with some critics remarking that it was the first 'Harry Potter' film to truly capture the essence of the novels. It remained as the most critically acclaimed film of the 'Harry Potter' film franchise until the release of 'Harry Potter and the Deathly Hallows – Part 2'.
Cuarón's feature 'Children of Men', an adaptation of the P. D. James novel starring Clive Owen, Julianne Moore and Michael Caine, received wide critical acclaim, including three Academy Award nominations. Cuarón himself received two nominations for his work on the film in Best Film Editing (with Alex Rodríguez) and Best Adapted Screenplay (with several collaborators).
He created the production and distribution company Esperanto Filmoj ('Esperanto Films', named because of his support for the international language Esperanto), which has credits in the films 'Duck Season', 'Pan's Labyrinth', and 'Gravity'.
Cuarón also directed the controversial public service announcement 'I Am Autism' for Autism Speaks that was sharply criticized by disability rights groups for its negative portrayal of autism.
In 2010, Cuarón began to develop the film 'Gravity', a drama set in space. He was joined by producer David Heyman, with whom Cuarón worked on 'Harry Potter and the Prisoner of Azkaban'. Starring Sandra Bullock and George Clooney, the film was released in the fall of 2013 and opened the 70th Venice International Film Festival in August. The film received ten Academy Award nominations, including Best Picture and Best Director. Cuarón won the Academy Award for Best Director, becoming the first Latino to win the award, while he and Mark Sanger shared the award for Best Film Editing.
In 2013, Cuarón created 'Believe', a science fiction/fantasy/adventure series that is being broadcast as part of the 2013–14 United States network television schedule on NBC as a mid-season entry. The series was created by Cuarón for Bad Robot Productions and Warner Bros. Television. In 2014, 'TIME' placed him in its list of '100 Most Influential People in the World' - Pioneers.
Personal life.
Cuarón has been living in London since 2000. He was married to Italian actress and freelance journalist Annalisa Bugliani from 2001 to 2008. They have two children: daughter Tess Bu Cuarón (born 2003) and son Olmo Teodoro Cuarón (born 2005).
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1252'>
Arianism
Arianism is the nontrinitarian, theological teaching attributed to Arius (c. AD 250–336), a Christian presbyter in Alexandria, Egypt, concerning the relationship of God the Father to the Son of God, Jesus Christ. Arius asserted that the Son of God was a subordinate entity to God the Father. Deemed a heretic by the Ecumenical First Council of Nicaea of 325, Arius was later exonerated in 335 at the regional First Synod of Tyre, and then, after his death, pronounced a heretic again at the Ecumenical First Council of Constantinople of 381. The Roman Emperors Constantius II (337–361) and Valens (364–378) were Arians or Semi-Arians.
The Arian concept of Christ is that the Son of God did not always exist, but was created by—and is therefore distinct from—God the Father. This belief is grounded in the Gospel of John passage (14:28): 'You heard me say, 'I am going away and I am coming back to you.' If you loved me, you would be glad that I am going to the Father, for the Father is greater than I.'
Arianism is defined as those teachings attributed to Arius, supported by the Council of Rimini, which are in opposition to the post-Nicaean Trinitarian Christological doctrine, as determined by the first two Ecumenical Councils and currently maintained by the Roman Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, the Assyrian Church of the East, all Reformation-founded Protestant churches (Lutheran, Reformed/Presbyterian, and Anglican), and a large majority of groups founded after the Reformation and calling themselves Protestant (such as Methodist, Baptist, most Pentecostals). Modern Christian groups which may be seen as espousing some of the principles of Arianism include Unitarians, Oneness Pentecostals, The Church of Jesus Christ of Latter-day Saints, Jehovah's Witnesses, Iglesia ni Cristo and Branhamism, though the origins of their beliefs are not necessarily attributed to the teachings of Arius. 'Arianism' is also often used to refer to other nontrinitarian theological systems of the 4th century, which regarded Jesus Christ—the Son of God, the Logos—as either a created being (as in Arianism proper and Anomoeanism), or as neither uncreated nor created in the sense other beings are created (as in Semi-Arianism).
Origin.
Arius taught that God the Father and the Son of God did not always exist together eternally. Arians taught that the Logos was a divine being created by God the Father before the world. The Son of God is subordinate to God the Father. In English-language works, it is sometimes said that Arians believe that Jesus is or was a 'creature', in the sense of 'created being'. Arius and his followers appealed to Bible verses such as Jesus saying that the father is 'greater than I' (John 14:28), and 'The LORD/Yahweh created me at the beginning of his work' (Proverbs 8:22).
Controversy over Arianism arose in the late 3rd century and persisted throughout most of the 4th century. It involved most church members—from simple believers, priests and monks to bishops, emperors and members of Rome's imperial family. Such a deep controversy within the Church during this period of its development could not have materialized without significant historical influences providing a basis for the Arian doctrines. Some historians define and minimize the Arian conflict as the exclusive construct of Arius and a handful of rogue bishops engaging in heresy; but others reinvent Arius as a defender of 'original' Christianity, or as providing a conservative response against the politicization of Christianity seeking union with the Roman Empire. Of the roughly three hundred bishops in attendance at the Council of Nicaea, only two bishops did not sign the Nicene Creed, which condemned Arianism. Two Roman emperors, Constantius II and Valens, became Arians, as did prominent Gothic, Vandal and Lombard warlords both before and after the fall of the Western Roman Empire.
Lucian of Antioch had contended for a Christology very similar to what would later be known as Arianism and is thought to have influenced its development. (Arius was a student of Lucian's private academy in Antioch.) After the dispute over Arianism became politicized and a general solution to the divisiveness was sought—with a great majority holding to the Trinitarian position—the Arian position was officially declared heterodox.
Arianism continued to exist for several decades, even within the family of the emperor, the imperial nobility, and higher-ranking clergy. But, by the end of the 4th century it had surrendered its remaining ground to Trinitarianism in the official Roman church hierarchy. In western Europe, Arianism, which had been taught by Ulfilas, the Arian missionary to the Germanic tribes, was dominant among the Goths and Lombards (and, significantly for the late Empire, the Vandals); but it ceased to be the mainstream belief by the 8th century, as the rulers of these Germanic tribes gradually adopted Catholicism, beginning with Clovis I of the Franks in 496, then Reccared I of the Visigoths in 587 and Aripert I of the Lombards in 653. It was crushed through a series of military and political conquests, culminating in religious 'and' political domination of Europe over the next 1,000 years by Trinitarian forces in the Catholic Church. Trinitarianism has remained the dominant doctrine in all major branches of the Eastern and Western Church and later within Protestantism.
Beliefs.
Virtually all extant written material on Arianism is criticism and refutations written by opponents, with most literature written by Arian advocates long having been destroyed by the Trinitarian churches. As such the original teachings of Arius and his followers are difficult to define precisely today.
Arians do not believe in the traditional doctrine of the Trinity, which holds that God encompasses three persons in one being. The letter of Arian Auxentius regarding the Arian missionary Ulfilas, gives the clearest picture of Arian beliefs. Arian Ulfilas, who was ordained a bishop by Arian Eusebius of Nicomedia and returned to his people to work as a missionary, believed: God, the Father, ('unbegotten' God; Almighty God) always existing and who is the only true God (John 17:3). The Son of God, Jesus Christ, ('only-begotten God' John 1:18; Mighty God Isaiah 9:6) begotten before time began (Proverbs 8:22-29; Revelation 3:14; Colossians 1:15) and who is Lord/Master (1 Cor 8:6). The Holy Spirit (the illuminating and sanctifying power), who is neither God nor Lord/Master; First Corinthians 8:5-6 was cited as a proof text.
The creed of Arian Ulfilas (c. 311 – 383), which concludes a letter praising him written by Auxentius, distinguishes God the Father ('unbegotten'), who is the only true God from Son of God ('only-begotten'), who is Lord/Master; and the Holy Spirit (the illuminating and sanctifying power), who is neither God nor Lord/Master:
The creed of Arian Ulfila in Latin:
A letter from Arius (c. 250–336) to the Arian Eusebius of Nicomedia (died 341) succinctly states the core beliefs of the Arians:
First Council of Nicaea and its aftermath.
In 321, Arius was denounced by a synod at Alexandria for teaching a heterodox view of the relationship of Jesus to God the Father. Because Arius and his followers had great influence in the schools of Alexandria—counterparts to modern universities or seminaries—their theological views spread, especially in the eastern Mediterranean.
By 325, the controversy had become significant enough that the Emperor Constantine called an assembly of bishops, the First Council of Nicaea, which condemned Arius' doctrine and formulated the original Nicene Creed of 325. The Nicene Creed's central term, used to describe the relationship between the Father and the Son, is Homoousios (), or Consubstantiality, meaning 'of the same substance' or 'of one being'. (The Athanasian Creed is less often used but is a more overtly anti-Arian statement on the Trinity.)
The focus of the Council of Nicaea was the nature of the Son of God, and his precise relationship to God the Father (see Paul of Samosata and the Synods of Antioch). Arius taught that Jesus Christ was divine/holy and was sent to earth for the salvation of mankind, but that Jesus Christ was not equal to God the Father (infinite, primordial origin) in rank, 'and' that God the Father and the Son of God were not equal to the Holy Spirit (power of God the Father). Under Arianism, Christ was not consubstantial with God the Father: for Arius, the Father and the Son were of 'like' essence or being (see homoiousia) but not of the same essence or being (see homoousia). God the Father is a Deity and is divine, 'and' the Son of God is not a Deity but divine (I, the LORD, am Deity alone. Isaiah 46:9). God the Father sent Jesus to earth for the salvation of mankind (John 17:3). Ousia is essence or being, in Eastern Christianity, and is the aspect of God that is completely incomprehensible to mankind and human perception. It is all that subsists by itself and which has not its being in another, God the Father and God the Son and God the Holy Spirit all being uncreated.
According to the teaching of Arius, the preexistent Logos, and thus the incarnate Jesus Christ, was a created being; only the Son was directly created and begotten by God the Father, before the ages, but he was of a distinct, though similar, essence or substance from the Creator. His opponents argued that this would make Jesus less than God, and that this was heretical. Much of the distinction between the differing factions turned on the language Christ uses in the New Testament to express submission to God the Father. The theological term for this submission is kenosis. This Ecumenical council declared that Jesus Christ was a distinct being of God in existence or reality (hypostasis), which the Latin fathers translated as persona, and that Jesus was God in essence, being, or nature (ousia), which the Latin fathers translated as substantia.
Constantine is believed to have exiled those who refused to accept the Nicene creed—Arius himself, the deacon Euzoios, and the Libyan bishops Theonas of Marmarica and Secundus of Ptolemais—and also the bishops who signed the creed but refused to join in condemnation of Arius, Eusebius of Nicomedia and Theognis of Nicaea. The Emperor also ordered all copies of the 'Thalia', the book in which Arius had expressed his teachings, to be burned. However, there is no evidence that his son and ultimate successor, Constantius II, who was an Arian Christian, was exiled.
Although he was committed to maintaining what the church had defined at Nicaea, Constantine was also bent on pacifying the situation and eventually became more lenient toward those condemned and exiled at the council. First he allowed Eusebius of Nicomedia, who was a protégé of his sister, and Theognis to return once they had signed an ambiguous statement of faith. The two, and other friends of Arius, worked for Arius' rehabilitation. At the First Synod of Tyre in AD 335, they brought accusations against Athanasius, bishop of Alexandria, the primary opponent of Arius; after this, Constantine had Athanasius banished, since he considered him an impediment to reconciliation. In the same year, the Synod of Jerusalem under Constantine's direction readmitted Arius to communion in AD 336. Arius, however, died on the way to this event in Constantinople. Some scholars suggest that Arius may have been poisoned by his opponents. Eusebius and Theognis remained in the Emperor's favour, and when Constantine, who had been a catechumen much of his adult life, accepted baptism on his deathbed, it was from Eusebius of Nicomedia.
Theological debates.
The Council of Nicaea did not end the controversy, as many bishops of the Eastern provinces disputed the 'homoousios', the central term of the Nicene creed, as it had been used by Paul of Samosata, who had advocated a monarchianist Christology. Both the man and his teaching, including the term 'homoousios', had been condemned by the Synods of Antioch in 269.
Hence, after Constantine's death in 337, open dispute resumed again. Constantine's son Constantius II, who had become Emperor of the eastern part of the Empire, actually encouraged the Arians and set out to reverse the Nicene creed. His advisor in these affairs was Eusebius of Nicomedia, who had already led the Arian party at the Council of Nicaea and who was now made bishop of Constantinople.
Constantius used his power to exile bishops adhering to the Nicene creed, especially St Athanasius of Alexandria, who fled to Rome. In 355 Constantius became the sole Emperor and extended his pro-Arian policy toward the western provinces, frequently using force to push through his creed, even exiling Pope Liberius and installing Antipope Felix II.
As debates raged in an attempt to come up with a new formula, three camps evolved among the opponents of the Nicene creed. The first group mainly opposed the Nicene terminology and preferred the term 'homoiousios' (alike in substance) to the Nicene 'homoousios', while they rejected Arius and his teaching and accepted the equality and coeternality of the persons of the Trinity. Because of this centrist position, and despite their rejection of Arius, they were called 'semi-Arians' by their opponents. The second group also avoided invoking the name of Arius, but in large part followed Arius' teachings and, in another attempted compromise wording, described the Son as being like ('homoios') the Father. A third group explicitly called upon Arius and described the Son as unlike ('anhomoios') the Father. Constantius wavered in his support between the first and the second party, while harshly persecuting the third.
The debates among these groups resulted in numerous synods, among them the Council of Sardica in 343, the Council of Sirmium in 358 and the double Council of Rimini and Seleucia in 359, and no fewer than fourteen further creed formulas between 340 and 360, leading the pagan observer Ammianus Marcellinus to comment sarcastically: 'The highways were covered with galloping bishops.' None of these attempts were acceptable to the defenders of Nicene orthodoxy: writing about the latter councils, Saint Jerome remarked that the world 'awoke with a groan to find itself Arian.'
After Constantius' death in 361, his successor Julian, a devotee of Rome's pagan gods, declared that he would no longer attempt to favor one church faction over another, and allowed all exiled bishops to return; this resulted in further increasing dissension among Christians. The Emperor Valens, however, revived Constantius' policy and supported the 'Homoian' party, exiling bishops and often using force. During this persecution many bishops were exiled to the other ends of the Empire, (e.g., St Hilary of Poitiers to the Eastern provinces). These contacts and the common plight subsequently led to a rapprochement between the Western supporters of the Nicene creed and the 'homoousios' and the Eastern semi-Arians.
Theodosius and the Council of Constantinople.
It was not until the co-reigns of Gratian and Theodosius that Arianism was effectively wiped out among the ruling class and elite of the Eastern Empire. Theodosius' wife St Flacilla was instrumental in his campaign to end Arianism. Valens died in the Battle of Adrianople in 378 and was succeeded by Theodosius I, who adhered to the Nicene creed. This allowed for settling the dispute.
Two days after Theodosius arrived in Constantinople, 24 November 380, he expelled the Homoiousian bishop, Demophilus of Constantinople, and surrendered the churches of that city to Gregory Nazianzus, the leader of the rather small Nicene community there, an act which provoked rioting. Theodosius had just been baptized, by bishop Acholius of Thessalonica, during a severe illness, as was common in the early Christian world. In February he and Gratian had published an edict that all their subjects should profess the faith of the bishops of Rome and Alexandria (i.e., the Nicene faith), or be handed over for punishment for not doing so.
Although much of the church hierarchy in the East had opposed the Nicene creed in the decades leading up to Theodosius' accession, he managed to achieve unity on the basis of the Nicene creed. In 381, at the Second Ecumenical Council in Constantinople, a group of mainly Eastern bishops assembled and accepted the Nicene Creed of 381, which was supplemented in regard to the Holy Spirit, as well as some other changes: see Comparison between Creed of 325 and Creed of 381. This is generally considered the end of the dispute about the Trinity and the end of Arianism among the Roman, non-Germanic peoples.
Later debates.
Epiphanius of Salamis labelled the party of Basil of Ancyra in 358 'Semi-Arianism'. This is considered unfair by Kelly who states that some members of the group were virtually orthodox from the start but disliked the adjective 'homoousios' while others had moved in that direction after the out-and-out Arians had come into the open.
Early medieval Germanic kingdoms.
However, during the time of Arianism's flowering in Constantinople, the Gothic convert Ulfilas (later the subject of the letter of Auxentius cited above) was sent as a missionary to the Gothic barbarians across the Danube, a mission favored for political reasons by emperor Constantius II. Ulfilas' initial success in converting this Germanic people to an Arian form of Christianity was strengthened by later events. When the Germanic peoples entered the Roman Empire and founded successor-kingdoms in the western part, most had been Arian Christians for more than a century.
The conflict in the 4th century had seen Arian and Nicene factions struggling for control of the Church. In contrast, in the Arian German kingdoms established on the wreckage of the Western Roman Empire in the 5th century, there were entirely separate Arian and Nicene Churches with parallel hierarchies, each serving different sets of believers. The Germanic elites were Arians, and the majority population was Nicene. Many scholars see the persistence of Germanic Arianism as a strategy that was followed in order to differentiate the Germanic elite from the local inhabitants and their culture and also to maintain the Germanic elite's separate group identity.
Most Germanic tribes were generally tolerant of the Nicene beliefs of their subjects. However, the Vandals tried for several decades to force their Arian beliefs on their North African Nicene subjects, exiling Nicene clergy, dissolving monasteries, and exercising heavy pressure on non-conforming Christians.
By the beginning of the 8th century, these kingdoms had either been conquered by Nicene neighbors (Ostrogoths, Vandals, Burgundians) or their rulers had accepted Nicene Christianity (Visigoths, Lombards).
The Franks and the Anglo-Saxons were unique among the Germanic peoples in that they entered the empire as pagans and converted to Nicene (Catholic) Christianity directly, guided by their kings, Clovis and Æthelberht of Kent.
Remnants in the West, 5th to 7th centuries.
However, much of southeastern Europe and central Europe, including many of the Goths and Vandals respectively, had embraced Arianism (the Visigoths converted to Arian Christianity in 376), which led to Arianism being a religious factor in various wars in the Roman Empire. In the west, organized Arianism survived in North Africa, in Hispania, and parts of Italy until it was finally suppressed in the 6th and 7th centuries. Grimwald, King of the Lombards (662–671), and his young son and successor Garibald (671), were the last Arian kings in Europe.
'Arian' as a polemical epithet.
The term 'Arian' bestowed by Athanasius upon his opponents in the Christological debate was polemical. Even in Athanasius' Orations against the Arians, Arius hardly emerges consistently as the creative individual originator of the heresy that bears his name, even though it would have greatly strengthened Athanasius' case to present him in that light. Arius was not really very important to general Arianism after his exile at Nicaea. The efforts to get Arius brought out of exile on the parts of Eusebius of Nicomedia were chiefly political concerns and there is little evidence that any of Arius' writings were used as doctrinal norms even in the East. Labels such as 'semi-Arian' or 'neo-Arian' are misleading, for those labelled so would have disavowed the importance of their relation to Arius.
In many ways, the conflict around Arian beliefs in the 4th, 5th and 6th centuries helped firmly define the centrality of the Trinity in Nicene Christian theology. As the first major intra-Christian conflict after Christianity's legalization, the struggle between Nicenes and Arians left a deep impression on the institutional memory of Nicene churches.
Thus, over the past 1,500 years, some Christians have used the term 'Arian' to refer to those groups that see themselves as worshiping Jesus Christ or respecting his teachings, but do not hold to the Nicene creed. Despite the frequency with which this name is used as a polemical label, there has been no historically continuous survival of Arianism into the modern era.
Arianism resurfaces after the Reformation, 16th century.
Following the start of the Protestant Reformation in 1517, it did not take long for Arian and other non-trinitarian views to resurface. The first recorded English antitrinitarian was John Assheton, who was forced to recant before Thomas Cranmer in 1548. At the Anabaptist Council of Venice in 1550, the early Italian instigators of the Radical Reformation committed to the views of Miguel Servet (d. 1553), and these were promulgated by Giorgio Biandrata and others into Poland and Transylvania. The antitrinitarian wing of the Polish Reformation separated from the Calvinist 'ecclesia maior' to form the 'ecclesia minor' or Polish Brethren. These were commonly referred to as 'Arians' due to their rejection of the Trinity, though in fact the Socinians, as they were later known, went further than Arius to the position of Photinus. The epithet 'Arian' was also applied to the early Unitarians such as John Biddle, though in denying the pre-existence of Christ they were again largely Socinians, not Arians.
In the 18th century the 'dominant trend' in Britain, particularly in Latitudinarianism, was towards Arianism, with which the names of Samuel Clarke, Benjamin Hoadly, William Whiston and Isaac Newton are associated. To quote the 'Encyclopædia Britannica' article on Arianism: 'In modern times some Unitarians are virtually Arians in that they are unwilling either to reduce Christ to a mere human being or to attribute to him a divine nature identical with that of the Father.' However, their doctrines cannot be considered representative of traditional Arian doctrines or vice-versa.
A similar view was held by the ancient anti-Nicene Pneumatomachi (from the Greek words for 'spirit' and 'fighters', i.e. 'fighters against the Spirit'), so called because they opposed the deification of the Holy Ghost affirmed at Nicaea. However, the Pneumatomachi were adherents of Macedonianism, and though their beliefs were somewhat reminiscent of Arianism, they were a distinct group.
The Iglesia ni Cristo is one of the largest groups that teaches a similar doctrine, though it is really closer to Socinianism, believing the Word in John 1:1 to be God's plan of salvation, not Christ, and thus that Christ did not preexist.
Other groups that may be considered Arian include the Church of God (7th day) - Salem Conference and the Christian Churches of God.
Other groups opposing the Trinity are not necessarily Arian.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1254'>
August 1
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1256'>
Antoninus Pius
Antoninus Pius (born 19 September 86 AD – died 7 March 161 AD), also known as Antoninus, was Roman Emperor from 138 to 161. He was a member of the Nerva–Antonine dynasty and the Aurelii.
He acquired the name Pius after his accession to the throne, either because he compelled the Senate to deify his adoptive father Hadrian, or because he had saved senators sentenced to death by Hadrian in his later years.
Early life.
Childhood and family.
He was born near Lanuvium, the only child of Titus Aurelius Fulvus, consul in 89, whose family came from Nemausus (modern Nîmes); his mother was Arria Fadilla. Antoninus' father and paternal grandfather died when he was young, and he was raised by Gnaeus Arrius Antoninus, his maternal grandfather, reputed by contemporaries to be a man of integrity and culture and a friend of Pliny the Younger. His mother then married Publius Julius Lupus, a man of consular rank and suffect consul in 98; two daughters, Arria Lupula and Julia Fadilla, were born from that union.
Marriage and children.
Some time between 110 and 115, he married Annia Galeria Faustina the Elder. They are believed to have enjoyed a happy marriage. Faustina was the daughter of consul Marcus Annius Verus and Rupilia Faustina (a half-sister to Roman Empress Vibia Sabina). Faustina was a beautiful woman, well known for her wisdom. She spent her whole life caring for the poor and assisting the most disadvantaged Romans.
Faustina bore Antoninus four children, two sons and two daughters. They were:
When Faustina died in 141, Antoninus was greatly distressed. In honor of her memory, he asked the Senate to deify her as a goddess, and authorised the construction of a temple in the Roman Forum in her name, with priestesses serving in her temple. He had various coins with her portrait struck in her honor. These coins were inscribed 'DIVA FAUSTINA' and were elaborately decorated. He also founded a charity, the 'Puellae Faustinianae' or 'Girls of Faustina', which assisted orphaned girls. Finally, Antoninus created a new 'alimenta' (see Grain supply to the city of Rome).
Favor with Hadrian.
Having filled the offices of quaestor and praetor with more than usual success, he obtained the consulship in 120. He was next appointed by the Emperor Hadrian as one of the four proconsuls to administer Italia, then greatly increased his reputation by his conduct as proconsul of Asia, probably during 134–135.
He acquired much favor with the Emperor Hadrian, who adopted him as his son and successor on 25 February 138, after the death of his first adopted son Lucius Aelius, on the condition that Antoninus would in turn adopt Marcus Annius Verus, the son of his wife's brother, and Lucius, son of Aelius Verus, who afterwards became the emperors Marcus Aurelius and Lucius Verus.
Emperor.
On his accession, Antoninus' name became 'Imperator Caesar Titus Aelius Hadrianus Antoninus Augustus Pontifex Maximus'. One of his first acts as Emperor was to persuade the Senate to grant divine honours to Hadrian, which they had at first refused; his efforts to persuade the Senate to grant these honours is the most likely reason given for his title of 'Pius' (dutiful in affection; compare 'pietas'). Two other reasons for this title are that he would support his aged father-in-law with his hand at Senate meetings, and that he had saved those men that Hadrian, during his period of ill-health, had condemned to death.
Immediately after Hadrian's death, Antoninus approached Marcus and requested that his marriage arrangements be amended: Marcus' betrothal to Ceionia Fabia would be annulled, and he would be betrothed to Faustina, Antoninus' daughter, instead. Faustina's betrothal to Ceionia's brother Lucius Commodus would also have to be annulled. Marcus consented to Antoninus' proposal.
Antoninus built temples, theaters, and mausoleums, promoted the arts and sciences, and bestowed honours and financial rewards upon the teachers of rhetoric and philosophy. Antoninus made few initial changes when he became emperor, leaving intact as far as possible the arrangements instituted by Hadrian.
There are no records of any military acts during his reign in which he personally participated. One modern scholar has written 'It is almost certain not only that at no time in his life did he ever see, let alone command, a Roman army, but that, throughout the twenty-three years of his reign, he never went within five hundred miles of a legion'.
His reign was the most peaceful in the entire history of the Principate; while there were several military disturbances throughout the Empire in his time, in Mauretania, Iudaea, and amongst the Brigantes in Britannia, none of them are considered serious. It was however in Britain that Antoninus decided to follow a new, more aggressive path, with the appointment of a new governor in 139, Quintus Lollius Urbicus.
Under instructions from the emperor, he undertook an invasion of southern Scotland, winning some significant victories, and constructing the Antonine Wall from the Firth of Forth to the Firth of Clyde, although it was soon abandoned for reasons that are still not quite clear. There were also some troubles in Dacia Inferior which required the granting of additional powers to the procurator governor and the dispatchment of additional soldiers to the province. Also during his reign the governor of Upper Germany, probably Caius Popillius Carus Pedo, built new fortifications in the Agri Decumates, advancing the Limes Germanicus fifteen miles forward in his province and neighboring Raetia.
Nevertheless, Antoninus was virtually unique among emperors in that he dealt with these crises without leaving Italy once during his reign, but instead dealt with provincial matters of war and peace through their governors or through imperial letters to the cities such as Ephesus (of which some were publicly displayed). This style of government was highly praised by his contemporaries and by later generations.
Legal reforms.
Of the public transactions of this period there is only the scantiest of information, but, to judge by what is extant, those twenty-two years were not remarkably eventful in comparison to those before and after his reign. However, he did take a great interest in the revision and practice of the law throughout the empire. Although he was not an innovator, he would not follow the absolute letter of the law; rather he was driven by concerns over humanity and equality, and introduced into Roman law many important new principles based upon this notion.
In this, the emperor was assisted by five chief lawyers: L. Fulvius Aburnius Valens, an author of legal treatises; L. Volusius Maecianus, chosen to conduct the legal studies of Marcus Aurelius, and author of a large work on Fidei Commissa (Testamentary Trusts); L. Ulpius Marcellus, a prolific writer; and two others. His reign saw the appearance of the 'Institutes of Gaius', an elementary legal manual for beginners (see Gaius (jurist)).
Antoninus passed measures to facilitate the enfranchisement of slaves. In criminal law, Antoninus introduced the important principle that accused persons are not to be treated as guilty before trial. He also asserted the principle, that the trial was to be held, and the punishment inflicted, in the place where the crime had been committed. He mitigated the use of torture in examining slaves by certain limitations. Thus he prohibited the application of torture to children under fourteen years, though this rule had exceptions.
One highlight during his reign occurred in 148, with the nine-hundredth anniversary of the foundation of Rome being celebrated by the hosting of magnificent games in Rome. It lasted a number of days, and a host of exotic animals were killed, including elephants, giraffes, tigers, rhinoceroses, crocodiles and hippopotami. While this increased Antoninus’s popularity, the frugal emperor had to debase the Roman currency. He decreased the silver purity of the denarius from 89% to 83.5% — the actual silver weight dropping from 2.88 grams to 2.68 grams.
Scholars place Antoninus Pius as the leading candidate for fulfilling the role as a friend of Rabbi Judah the Prince. According to the Talmud (Avodah Zarah 10a-b), Rabbi Judah was very wealthy and greatly revered in Rome. He had a close friendship with 'Antoninus', possibly Antoninus Pius, who would consult Rabbi Judah on various worldly and spiritual matters.
Death.
In 156, Antoninus Pius turned 70. He found it difficult to keep himself upright without stays. He started nibbling on dry bread to give him the strength to stay awake through his morning receptions. As Antoninus aged, Marcus took on more administrative duties, more still after Gavius Maximus, the praetorian prefect (an office that was as much secretarial as military), died in 156 or 157. In 160, Marcus and Lucius were designated joint consuls for the following year. Perhaps Antoninus was already ill; in any case, he died before the year was out.
Two days before his death, the biographer reports, Antoninus was at his ancestral estate at Lorium, in Etruria, about twelve miles (19 km) from Rome. He ate Alpine cheese at dinner quite greedily. In the night he vomited; he had a fever the next day. The day after that, 7 March 161, he summoned the imperial council, and passed the state and his daughter to Marcus. The emperor gave the keynote to his life in the last word that he uttered when the tribune of the night-watch came to ask the password—'aequanimitas' (equanimity). He then turned over, as if going to sleep, and died. His death closed out the longest reign since Augustus (surpassing Tiberius by a couple of months).
Antoninus Pius' funeral ceremonies were, in the words of the biographer, 'elaborate'. If his funeral followed the pattern of past funerals, his body would have been incinerated on a pyre at the Campus Martius, while his spirit would rise to the gods' home in the heavens. Marcus and Lucius nominated their father for deification. In contrast to their behavior during Antoninus' campaign to deify Hadrian, the senate did not oppose the emperors' wishes. A 'flamen', or cultic priest, was appointed to minister the cult of the deified Antoninus, now 'Divus Antoninus'.
Antoninus Pius' remains were laid to rest in Hadrian's mausoleum, a column was dedicated to him on the Campus Martius, and the temple he had built in the Forum in 141 to his deified wife Faustina was rededicated to the deified Faustina and the deified Antoninus. It survives as the church of San Lorenzo in Miranda.
Historiography.
The only account of his life handed down to us is that of the 'Augustan History', an unreliable and mostly fabricated work. Nevertheless, it still contains information that is considered reasonably sound – for instance, it is the only source that mentions the erection of the Antonine Wall in Britain. Antoninus is unique among Roman emperors in that he has no other biographies. Historians have therefore turned to public records for what details we know.
In later scholarship.
Antoninus in many ways was the ideal of the landed gentleman praised not only by ancient Romans, but also by later scholars of classical history, such as Edward Gibbon or the author of the article on Antoninus Pius in the ninth edition of the Encyclopædia Britannica:
Later historians had a more nuanced view of his reign. According to the historian J. B. Bury,
Inevitably, the surviving evidence is not complete enough to determine whether one should interpret, with older scholars, that he wisely curtailed the activities of the Roman Empire to a careful minimum, or perhaps that he was uninterested in events away from Rome and Italy and his inaction contributed to the pressing troubles that faced not only Marcus Aurelius but also the emperors of the third century. The German historian Ernst Kornemann argued in his 'Römische Geschichte' [2 vols., ed. by H. Bengtson, Stuttgart 1954] that the reign of Antoninus comprised 'a succession of grossly wasted opportunities,' given the upheavals that were to come. The argument gains force from the fact that the Parthians in the East were soon to make no small amount of mischief after Antoninus' passing. Kornemann's case is that Antoninus might have waged preventive wars to head off these threats.
Descendants.
Although only one of his four children survived to adulthood, Antoninus came to be the ancestor of generations of prominent Roman statesmen and socialites, including at least one empress consort, and was the maternal grandfather of the Emperor Commodus. The family of Antoninus Pius and Faustina the Elder also represents one of the few periods in ancient Roman history where the position of Emperor passed smoothly from father to son. Direct descendants of Antoninus and Faustina were confirmed to exist at least into the fifth century AD.
External links.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1259'>
August 3
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1260'>
Advanced Encryption Standard
The Advanced Encryption Standard (AES) is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001.
AES is based on the Rijndael cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes.
For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by the U.S. government and is now used worldwide. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data.
In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable (see Advanced Encryption Standard process for more details).
AES became effective as a federal government standard on May 26, 2002 after approval by the Secretary of Commerce. AES is included in the ISO/IEC 18033-3 standard. AES is available in many different encryption packages, and is the first publicly accessible and open cipher approved by the National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module (see Security of AES, below).
The name 'Rijndael' is a play on the names of the two inventors (Joan Daemen and Vincent Rijmen).
Definitive standards.
The Advanced Encryption Standard (AES) is defined in each of: FIPS PUB 197, 'Advanced Encryption Standard (AES)', and ISO/IEC 18033-3 ('Block ciphers').
Description of the cipher.
AES is based on a design principle known as a substitution-permutation network, a combination of both substitution and permutation, and is fast in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael which has a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, the Rijndael specification 'per se' is specified with block and key sizes that may be any multiple of 32 bits, both with a minimum of 128 and a maximum of 256 bits.
AES operates on a 4×4 column-major order matrix of bytes, termed the 'state', although some versions of Rijndael have a larger block size and have additional columns in the state. Most AES calculations are done in a special finite field.
The key size used for an AES cipher specifies the number of repetitions of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of cycles of repetition is as follows: 10 cycles of repetition for 128-bit keys, 12 cycles for 192-bit keys, and 14 cycles for 256-bit keys.
Each round consists of several processing steps, each containing four similar but different stages, including one that depends on the encryption key itself. A set of reverse rounds are applied to transform ciphertext back into the original plaintext using the same encryption key.
The SubBytes step.
In the SubBytes step, each byte a_{i,j} in the 'state' matrix is replaced with a SubByte S(a_{i,j}) using an 8-bit substitution box, the Rijndael S-box. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., S(a_{i,j}) ≠ a_{i,j}, and also any opposite fixed points, i.e., S(a_{i,j}) XOR a_{i,j} ≠ 0xFF.
During decryption, the Inverse SubBytes step is used; it first applies the inverse affine transformation and then finds the multiplicative inverse, reversing the order of operations in the SubBytes step.
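The S-box construction described above can be sketched in Python. This is an illustrative reimplementation, not a reference one; the function names are my own, and the brute-force inverse search is chosen for clarity rather than speed.

```python
def gf_mul(a, b):
    # Multiplication in GF(2^8) modulo the Rijndael polynomial x^8 + x^4 + x^3 + x + 1.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # reduce by the field polynomial on overflow
        b >>= 1
    return p

def gf_inv(a):
    # Multiplicative inverse, found by exhaustive search; 0 maps to 0 by convention.
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def sub_byte(a):
    # S-box: multiplicative inverse followed by the affine transformation
    # b = x XOR rotl(x,1) XOR rotl(x,2) XOR rotl(x,3) XOR rotl(x,4) XOR 0x63.
    x = gf_inv(a)
    b = x
    for i in range(1, 5):
        b ^= ((x << i) | (x >> (8 - i))) & 0xFF
    return b ^ 0x63
```

The derangement and no-opposite-fixed-point properties can be checked directly by iterating sub_byte over all 256 inputs.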
The ShiftRows step.
The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one position to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively. For block sizes of 128 bits and 192 bits, the shifting pattern is the same: row n is cyclically shifted left by n-1 bytes. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. (Rijndael variants with a larger block size have slightly different offsets.) For a 256-bit block, the first row is unchanged and the shifting for the second, third and fourth row is 1 byte, 3 bytes and 4 bytes respectively; this change only applies to the Rijndael cipher when used with a 256-bit block, as AES does not use 256-bit blocks. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers.
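For the 128-bit AES state, the row offsets described above can be sketched as follows (rows are shown row-major for readability, although the AES state is conventionally stored column-major; the function name is my own):

```python
def shift_rows(state):
    # state: four rows of four bytes; row r is cyclically rotated left by r positions.
    return [row[r:] + row[:r] for r, row in enumerate(state)]
```

After this step, each column of the output draws one byte from each column of the input, which is what spreads bytes across columns before MixColumns.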
The MixColumns step.
In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher.
During this operation, each column is multiplied by the fixed matrix:
| 2 3 1 1 |
| 1 2 3 1 |
| 1 1 2 3 |
| 3 1 1 2 |
Matrix multiplication is composed of multiplication and addition of the entries. Here the multiplication operation is defined as follows: multiplication by 1 means no change, multiplication by 2 means shifting one bit to the left, and multiplication by 3 means shifting one bit to the left and then performing XOR with the initial unshifted value. After shifting, a conditional XOR with 0x1B should be performed if the shifted value is larger than 0xFF, i.e., if it overflowed 8 bits. (These are special cases of the usual multiplication in GF(2^8).) Addition is simply XOR.
In a more general sense, each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 0x03·x^3 + x^2 + x + 0x02. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from GF(2)[x]. The MixColumns step can also be viewed as a multiplication by a particular MDS matrix in the finite field GF(2^8). This process is described further in the article Rijndael mix columns.
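The shift-and-XOR rules above can be sketched in Python as a single-column transform (an illustrative reimplementation; the function names are my own, and I have verified the single-column test values by hand against the rules just described):

```python
def xtime(a):
    # Multiply by 2 in GF(2^8): shift one bit left, then conditionally XOR
    # the reduction constant 0x1B if the result overflowed 8 bits.
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def mix_column(col):
    # Multiply one 4-byte column by the fixed MixColumns matrix.
    # Each output byte is 2*a_i XOR 3*a_{i+1} XOR a_{i+2} XOR a_{i+3},
    # using 3*a = 2*a XOR a, with indices cycling down the column.
    a0, a1, a2, a3 = col
    return [
        xtime(a0) ^ xtime(a1) ^ a1 ^ a2 ^ a3,
        a0 ^ xtime(a1) ^ xtime(a2) ^ a2 ^ a3,
        a0 ^ a1 ^ xtime(a2) ^ xtime(a3) ^ a3,
        xtime(a0) ^ a0 ^ a1 ^ a2 ^ xtime(a3),
    ]
```

Note that a column whose four bytes are equal is left unchanged, since 2 XOR 3 XOR 1 XOR 1 = 1 in this field.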
The AddRoundKey step.
In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR.
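The AddRoundKey step itself is a one-liner (a sketch; the function name is my own):

```python
def add_round_key(state, round_key):
    # XOR each state byte with the corresponding round-key byte. Because XOR
    # is its own inverse, applying the same subkey again undoes the step,
    # which is why encryption and decryption use the same operation here.
    return bytes(s ^ k for s, k in zip(state, round_key))
```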
Optimization of the cipher.
On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining the SubBytes and ShiftRows steps with the MixColumns step by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables, and utilizes a total of four kilobytes (4096 bytes) of memory — one kilobyte for each table. A round can then be done with 16 table lookups and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step.
If the resulting four-kilobyte table size is too large for a given target platform, the table lookup operation can be performed with a single 256-entry 32-bit (i.e. 1 kilobyte) table by the use of circular rotates.
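One common packing for such a table can be sketched as follows. This is an assumption-laden illustration: the byte order is one conventional choice, and an identity substitution stands in for the real Rijndael S-box, which any 256-entry table could replace.

```python
def xtime(a):
    # Multiply by 2 in GF(2^8) (shift, then conditionally reduce by 0x1B).
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def rotr32(w, n):
    # Rotate a 32-bit word right by n bits.
    return ((w >> n) | (w << (32 - n))) & 0xFFFFFFFF

def make_t0(sbox):
    # T0[a] packs the MixColumns products of the substituted byte into one
    # 32-bit word: (2*S[a], S[a], S[a], 3*S[a]) from high byte to low byte.
    t0 = []
    for a in range(256):
        s = sbox[a]
        s2 = xtime(s)
        s3 = s2 ^ s  # multiply by 3 = multiply by 2, then XOR the original
        t0.append((s2 << 24) | (s << 16) | (s << 8) | s3)
    return t0
```

The remaining tables are byte rotations of T0 (for example, T1[a] = rotr32(T0[a], 8)), which is why a single 1-kilobyte table plus rotations can stand in for the full 4 kilobytes.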
Using a byte-oriented approach, it is possible to combine the SubBytes, ShiftRows, and MixColumns steps into a single round operation.
Security.
Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information:
The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.
Known attacks.
For cryptographers, a cryptographic 'break' is anything faster than a brute force—performing one trial decryption for each key (see Cryptanalysis). This includes results that are infeasible with current technology. The largest successful publicly known brute force attack against any block-cipher encryption was against a 64-bit RC5 key by distributed.net in 2006.
AES has a fairly simple algebraic description. In 2002, a theoretical attack, termed the 'XSL attack', was announced by Nicolas Courtois and Josef Pieprzyk, purporting to show a weakness in the AES algorithm due to its simple description. Since then, other papers have shown that the attack as originally presented is unworkable; see XSL attack on block ciphers.
During the AES process, developers of competing algorithms wrote of Rijndael, '..we are concerned about [its] use..in security-critical applications.' However, in October 2000 at the end of the AES selection process, Bruce Schneier, a developer of the competing algorithm Twofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he does not 'believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic.'
On July 1, 2009, Bruce Schneier blogged
about a related-key attack on the 192-bit and 256-bit versions of AES, discovered by Alex Biryukov and Dmitry Khovratovich,
which exploits AES's somewhat simple key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5. This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys.
Another attack was blogged by Bruce Schneier
on July 30, 2009 and released as a preprint
on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES.
In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint.
This known-key distinguishing attack is an improvement of the rebound or the start-from-the-middle attacks for AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-Sbox. It works on the 8-round version of AES-128, with a time complexity of 2^48, and a memory complexity of 2^32.
In July 2010 Vincent Rijmen published an ironic paper on 'chosen-key-relations-in-the-middle' attacks on AES-128.
The first key-recovery attacks on full AES were due to Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.1 operations to recover an AES-128 key. For AES-192 and AES-256, 2^189.7 and 2^254.4 operations are needed, respectively.
Side-channel attacks.
Side-channel attacks do not attack the underlying cipher, and so say nothing about its mathematical security. Rather, they attack implementations of the cipher on systems which inadvertently leak data. There are several such known attacks on certain implementations of AES.
In April 2005, D.J. Bernstein announced a cache-timing attack that he used to break a custom server that used OpenSSL's AES encryption. The attack required over 200 million chosen plaintexts. The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation); however, as Bernstein pointed out, 'reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples.'
In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache-timing attacks against AES. One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES.
In December 2009 an attack on some hardware implementations was published that used differential fault analysis and allows recovery of a key with a complexity of 2^32.
In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a 'near real time' recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL. Like some earlier attacks this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account.
NIST/CSEC validation.
The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of cryptographic modules validated to NIST FIPS 140-2 is required by the United States Government for encryption of all data that has a classification of Sensitive but Unclassified (SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: 'Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2.'
The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.
Although NIST publication 197 ('FIPS 197') is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as Triple DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an 'FIPS approved: AES' notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.
The Cryptographic Algorithm Validation Program (CAVP) allows for independent validation of the correct implementation of the AES algorithm at a reasonable cost. Successful validation results in being listed on the . This testing is a pre-requisite for the FIPS 140-2 module validation described below. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data.
FIPS 140-2 validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change.
Test vectors.
Test vectors are a set of known ciphertexts for a given input and key. NIST distributes the reference of AES test vectors as .
Performance.
High speed and low RAM requirements were criteria of the AES selection process. Thus AES performs well on a wide variety of hardware, from 8-bit smart cards to high-performance computers.
On a Pentium Pro, AES encryption requires 18 clock cycles per byte, equivalent to a throughput of about 11 MB/s for a 200 MHz processor. On a 1.7 GHz Pentium M throughput is about 60 MB/s.
On Intel Core i3/i5/i7 and AMD APU and FX CPUs supporting AES-NI instruction set extensions, throughput can be over 700 MB/s per thread.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1261'>
April 26
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1262'>
Argot
An argot (from French 'argot' ‘slang’) is a secret language used by various groups — e.g. schoolmates, outlaws, colleagues, among many others — to prevent outsiders from understanding their conversations. The term 'argot' is also used to refer to the informal specialized vocabulary from a particular field of study, occupation, or hobby, in which sense it overlaps with jargon.
The author Victor Hugo was one of the first to research argot extensively. He describes it in his 1862 novel 'Les Misérables' as the language of the dark; at one point, he says, 'What is argot, properly speaking? Argot is the language of misery.'
The earliest known record of the term 'argot' in this context was in a 1628 document. The word was probably derived from the contemporary name, 'les argotiers', given to a group of thieves at that time.
Under the strictest definition, an 'argot' is a proper language, with its own grammar and style. But such complete secret languages are rare, because the speakers usually have some public language in common, on which the argot is largely based. Such argots are mainly versions of another language, with a part of its vocabulary replaced by words unknown to the larger public; 'argot' used in this sense is synonymous with 'cant'. For example, 'argot' in this sense is used for systems such as 'verlan' and 'louchébem', which retain French syntax and apply transformations only to individual words (and often only to a certain subset of words, such as nouns, or semantic content words). Such systems are examples of 'argots à clef', or 'coded argots.'
Specific words can go from argot into common speech or the other way. For example, modern French 'loufoque' ‘crazy, goofy’, now common usage, originates in the louchébem transformation of Fr. 'fou' ‘crazy’.
Trivia.
'Piaf' remains to this day a Parisian argot word for “bird, sparrow”; it was taken up by the singer Edith Piaf as her stage name.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1264'>
Anisotropy
Anisotropy is the property of being directionally dependent, as opposed to isotropy, which implies identical properties in all directions. It can be defined as a difference, when measured along different axes, in a material's physical or mechanical properties (absorbance, refractive index, conductivity, tensile strength, etc.). An example of anisotropy is light passing through a polarizer. Another is wood, which is easier to split along its grain than against it.
Fields of interest.
Computer graphics.
In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet.
Anisotropic filtering (AF) is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. By reducing detail in one direction more than another, these effects can be reduced.
Chemistry.
A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximal regions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change.
In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule.
Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon.
Real-world imagery.
Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal).
Physics.
Physicists from the University of California, Berkeley reported their detection of the cosine anisotropy in cosmic microwave background radiation in 1977. Their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the early-Universe matter that is the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and the polarisation angles of quasars.
Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show 'filamentation' (such as that seen in lightning or a plasma globe) that is directional.
An 'anisotropic liquid' has the fluidity of a normal liquid, but its molecules exhibit an average structural order along the molecular axis, unlike water or chloroform, which contain no structural ordering of the molecules. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat isotropically, that is, independently of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of the typically diverse materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic.
Many crystals are anisotropic to light ('optical anisotropy'), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An 'axis of anisotropy' is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes.
Geology and Geophysics.
Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; indeed, significant seismic anisotropy has been detected in the Earth's crust, mantle and inner core.
Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; electrical conductivity in one direction (e.g. parallel to a layer), is different from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Sand-bearing hydrocarbon assets have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity/resistivity and the results are used to help find oil and gas in wells.
The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account, otherwise the results may be subject to error.
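For a layered aquifer, the difference between horizontal and vertical conductivity can be estimated with the standard thickness-weighted means: an arithmetic mean for flow parallel to the layers and a harmonic mean for flow across them. The sketch below uses my own function name and made-up example values.

```python
def layered_conductivity(layers):
    # layers: list of (thickness, conductivity) pairs for a stack of beds.
    total = sum(b for b, _ in layers)
    # Flow parallel to layering: thickness-weighted arithmetic mean.
    kh = sum(b * k for b, k in layers) / total
    # Flow perpendicular to layering: thickness-weighted harmonic mean,
    # dominated by the least permeable bed.
    kv = total / sum(b / k for b, k in layers)
    return kh, kv
```

Because the harmonic mean is always at most the arithmetic mean, kv ≤ kh for any layered stack, which is the anisotropy described above.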
Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet.
Medical acoustics.
Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners.
Material science and engineering.
Anisotropy, in materials science, is the directional dependence of a material's physical properties. Most materials exhibit anisotropic behavior. An example would be the dependence of Young's modulus on the direction of load.
Anisotropy in polycrystalline materials can also be due to certain texture patterns often produced during manufacturing of the material. In the case of rolling, 'stringers' of texture are produced in the direction of rolling, which can lead to vastly different properties in the rolling and transverse directions.
Some materials, such as wood and fibre-reinforced composites are very anisotropic, being much stronger along the grain/fibre than across it. Metals and alloys tend to be more isotropic, though they can sometimes exhibit significant anisotropic behaviour. This is especially important in processes such as deep-drawing.
Wood is a naturally anisotropic (transversely isotropic) material. Its properties vary widely when measured with or against the growth grain. For example, wood's strength and hardness is different for the same sample measured in different orientations.
Microfabrication.
Anisotropic etching techniques (such as deep reactive ion etching) are used in microfabrication processes to create well defined microscopic features with a high aspect ratio. These features are commonly used in MEMS and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants used to etch a certain material preferentially over certain crystallographic planes (e.g., KOH etching of silicon [100] produces pyramid-like structures).
Neuroscience.
Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to be anisotropic, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the individual's brain.
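Fractional anisotropy is conventionally computed from the three eigenvalues of the diffusion tensor as a normalized standard deviation, ranging from 0 (fully isotropic) to 1 (diffusion along a single axis). The sketch below uses the standard definition; the function name is my own.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    # FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||, where lambda is the
    # vector of diffusion-tensor eigenvalues and mean is their average.
    m = (l1 + l2 + l3) / 3
    num = math.sqrt((l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den if den else 0.0
```

Equal eigenvalues (free water) give FA = 0, while diffusion confined to one axis (an idealized fiber tract) gives FA = 1.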
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1267'>
Alpha decay
Alpha decay, or α-decay, is a type of radioactive decay in which an atomic nucleus emits an alpha particle and thereby transforms (or 'decays') into an atom with a mass number 4 less and atomic number 2 less. For example, uranium-238 decaying through α-particle emission to form thorium-234 can be expressed as:
²³⁸₉₂U → ²³⁴₉₀Th + α
Because an alpha particle is the same as the nucleus of a helium-4 atom, consisting of two protons and two neutrons and thus having mass number 4 and atomic number 2, this can also be written as:
²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He
Notice how, on either side of the nuclear equation, both the mass number and the atomic number are conserved: the mass number is 238 on the left side and (234 + 4) on the right side, and the atomic number is 92 on the left side and (90 + 2) on the right side.
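The bookkeeping above is simple enough to sketch (an illustrative helper; the function name is my own):

```python
def alpha_decay(mass_number, atomic_number):
    # The emitted alpha particle carries away A = 4 and Z = 2, so the daughter
    # nuclide is (A - 4, Z - 2); both quantities balance across the equation.
    return mass_number - 4, atomic_number - 2

# Uranium-238 (A = 238, Z = 92) decays to thorium-234 (A = 234, Z = 90).
```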
The alpha particle also has a charge +2, but the charge is usually not written in nuclear equations, which describe nuclear reactions without considering the electrons. This convention is not meant to imply that the nuclei necessarily occur in neutral atoms. Alpha decay typically occurs in the heaviest nuclides. In theory it can occur only in nuclei somewhat heavier than nickel (element 28), where overall binding energy per nucleon is no longer a maximum, and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitters being the lightest isotopes (mass numbers 106–110) of tellurium (element 52).
Alpha decay is by far the most common form of cluster decay where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind (in nuclear fission, a number of different pairs of daughters of approximately equal size are formed). Alpha decay is the most likely cluster decay because of the combined extremely high binding energy and relatively small mass of the helium-4 product nucleus (the alpha particle). Alpha decay, like other cluster decays, is fundamentally a quantum tunneling process. Unlike beta decay, alpha decay is governed by the interplay between the nuclear force and the electromagnetic force.
Alpha particles have a typical kinetic energy of 5 MeV (that is, ≈ 0.13% of their total energy, i.e. 110 TJ/kg) and a speed of 15,000 km/s. This corresponds to a speed of around 0.05 c. There is surprisingly small variation around this energy, due to the heavy dependence of the half-life of this process on the energy produced (see equations in the Geiger–Nuttall law). Because of their relatively large mass, +2 electric charge and relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, so their forward motion is effectively stopped within a few centimeters of air. Most of the helium produced on Earth (approximately 99% of it) is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a byproduct of natural gas production.
History.
Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions. For more details of this early work, see Alpha particle#History of discovery and use.
By 1928, George Gamow had solved the theory of the alpha decay via tunneling. The alpha particle is trapped in a potential well by the nucleus. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of 'tunneling' through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay, and the energy of the emission, which had been previously discovered empirically, and was known as the Geiger–Nuttall law.
Uses.
Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from fire that enter the chamber reduce the current, triggering the smoke detector's alarm. (See the smoke detector article for details.)
Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes and artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay. Plutonium-238, for example, requires only 2.5 millimetres of lead shielding to protect against unwanted radiation.
Static eliminators typically use polonium-210, an alpha emitter, to ionize air, allowing the 'static cling' to more rapidly dissipate.
Toxicity.
Being relatively heavy and positively charged, alpha particles tend to have a very short mean free path, and quickly lose kinetic energy within a short distance of their source. This results in several MeV being deposited in a relatively small volume of material. This increases the chance of cellular damage in cases of internal contamination. In general, external alpha radiation is not harmful since alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis. Even touching an alpha source is typically not harmful, though many alpha sources also are accompanied by beta-emitting radio daughters, and alpha emission is also accompanied by gamma photon emission. If substances emitting alpha particles are ingested, inhaled, injected or introduced through the skin, then it could result in a measurable dose.
The relative biological effectiveness (RBE) of alpha radiation is higher than that of beta or gamma radiation. RBE quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. The higher value for alpha radiation is generally attributable to the high linear energy transfer (LET) coefficient, which is about one ionization of a chemical bond for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons.
However, another component of alpha radiation is the recoil of the parent nucleus, termed alpha recoil. Conservation of momentum requires the parent nucleus to recoil, much as a rifle butt 'kicks' when a bullet leaves in the opposite direction. This gives a significant amount of energy to the recoiling nucleus, which also causes ionization damage (see ionizing radiation). The total energy of the recoil nucleus is readily calculable: it is roughly the mass of the alpha (4 u) divided by the mass of the parent (typically about 200 u), times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nuclei are typically heavy metals which preferentially collect on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations.
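The recoil-energy estimate above is simple momentum-conservation arithmetic, sketched below using the same rough round-number masses the text cites (a 4 u alpha, a parent of about 200 u).

```python
def recoil_energy_mev(alpha_energy_mev, parent_mass_u=200.0, alpha_mass_u=4.0):
    """Momentum conservation: the recoiling nucleus carries roughly
    (m_alpha / m_parent) of the alpha's kinetic energy. The default
    masses are the rough round numbers used in the text."""
    return alpha_energy_mev * alpha_mass_u / parent_mass_u

# A typical 5 MeV alpha from a ~200 u nucleus leaves the recoil
# nucleus with about 0.1 MeV (100 keV) -- far above chemical-bond
# energies, hence the ionization damage described in the text.
print(recoil_energy_mev(5.0))  # -> 0.1
```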
The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles which can damage cells in the lung tissue. The death of Marie Curie at age 66 from leukemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden.
Russian dissident Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1270'>
Extreme poverty
Extreme poverty, or absolute poverty, was originally defined by the United Nations in 1995 as “a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information. It depends not only on income but also on access to services.” Currently, extreme poverty widely refers to earning below the international poverty line of $1.25/day (in 2005 prices), set by the World Bank. This measure is the equivalent of earning $1.00 a day in 1996 US prices, hence the widely used expression, living on “less than a dollar a day.” The vast majority of those in extreme poverty – 96% – reside in South Asia, Sub-Saharan Africa, East Asia and the Pacific; nearly half live in India and China alone.
The reduction of extreme poverty and hunger was the first Millennium Development Goal (MDG1), as set by 189 United Nations Member States in 2000. Specifically, MDG1 set a target of reducing the extreme poverty rate in half by 2015, a goal that was met 5 years ahead of schedule. With the expiration of the MDGs fast approaching, the international community, including the UN, the World Bank and the US, has set a target of ending extreme poverty by 2030.
Defining Extreme Poverty.
Income Based Definition.
Extreme poverty is defined by the international community as earning less than $1.25 a day, as measured in 2005 international prices. The international poverty line was originally set at earning $1 a day when the Millennium Development Goals were first published. However, in 2008, the World Bank raised the line to $1.25 to account for higher price levels in several developing countries than previously estimated.
As of September 2010 (the most recent reliable date), according to the UN, roughly 1.2 billion people remain in extreme poverty by this metric. Despite the significant number of individuals still earning below the international poverty line, this figure represents significant progress for the international community, as it is 700 million fewer than the 1.9 billion living in extreme poverty in 1990. As highlighted in the next section, though there are many criticisms of a purely income-based approach to measuring extreme poverty, the $1.25/day line remains the most widely used metric because it is easily accessible to the public at large and “draws attention to those in the direst need.”
Common Criticism/Alternatives.
Though widely used by most international organizations, the $1.25/day extreme poverty line has come under scrutiny from a variety of actors. For example, when used to measure the headcount ratio (i.e. the percentage of people living below the line), the $1.25/day line is unable to capture other important measures such as depth of poverty, relative poverty, and how people view their own financial situation (known as the “socially subjective poverty line”). Moreover, the calculation of the poverty line relies on several debatable assumptions about purchasing power parity, homogeneity of household size and makeup, and the consumer prices used to determine a basket of essential goods. In addition, missing data from the poorest and most fragile countries may muddle the picture even further.
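The headcount ratio's blindness to depth of poverty is easy to demonstrate. The sketch below contrasts it with the poverty gap index, one standard depth measure; the income lists are hypothetical, invented only to make the contrast visible.

```python
def headcount_ratio(incomes, line=1.25):
    """Share of people living below the poverty line."""
    return sum(1 for y in incomes if y < line) / len(incomes)

def poverty_gap_index(incomes, line=1.25):
    """Average shortfall below the line, as a share of the line
    (counted as zero for the non-poor) -- a simple depth measure."""
    return sum(max(line - y, 0.0) / line for y in incomes) / len(incomes)

# Two hypothetical populations with identical headcount ratios (50%)
# but very different depths of poverty (daily incomes in 2005 USD):
barely_poor = [1.20, 1.20, 2.00, 2.00]
deeply_poor = [0.30, 0.30, 2.00, 2.00]
print(headcount_ratio(barely_poor), headcount_ratio(deeply_poor))    # 0.5 0.5
print(poverty_gap_index(barely_poor), poverty_gap_index(deeply_poor))  # 0.02 vs 0.38
```

Both villages look identical to the headcount measure, yet the poor in the second are nearly a full dollar further below the line, which is exactly the information the $1.25/day headcount discards.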
To address these problems, several alternative instruments for measuring extreme poverty have been suggested which incorporate other factors such as malnutrition and lack of access to a basic education. Thus, the 2010 Human Development Report introduced the Multidimensional Poverty Index (MPI), which measures not only income, but also basic needs. Using this tool, the United Nations Development Programme (UNDP) estimated that roughly 1.75 billion people remained in extreme poverty as opposed to the conventional figure of 1.2 billion. As this figure is considered more “holistic,” it may shed new light on relative deprivation within a country. For example, in Ethiopia, 39% of the population is considered extremely poor under conventional measures, but 90% are in multidimensional poverty.
Another version of the MPI, known as the Alkire-Foster Method, created by Sabina Alkire and James Foster of the Oxford Poverty & Human Development Initiative (OPHI), can be broken down to reflect both the incidence and the intensity of poverty. This tool is useful as development officials, using the “M0 measure” of the method (which is calculated by multiplying “the proportion of people who are poor by the percentage of dimensions in which they are deprived”), can determine the most likely causes of poverty within a region. For example, in the Gasa District of Bhutan, using the M0 measure of the Alkire-Foster method reveals that poverty in the region is primarily caused by a lack of access to electricity and drinking water, in addition to widespread overcrowding. In contrast, data from the Chhukha District of Bhutan reveals that income is a much larger contributor to poverty as opposed to other dimensions within the region.
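The M0 calculation described above, incidence multiplied by intensity, can be sketched as follows. The deprivation data and the one-third cutoff are hypothetical, chosen only to make the arithmetic visible.

```python
def m0_measure(deprivations, cutoff):
    """Alkire-Foster adjusted headcount M0: the proportion of people
    who are multidimensionally poor (H, incidence) times the average
    share of dimensions in which the poor are deprived (A, intensity).
    `deprivations` is a list of per-person 0/1 indicator lists; a
    person counts as poor if deprived in at least `cutoff` of the
    dimensions."""
    n_dims = len(deprivations[0])
    poor = [row for row in deprivations if sum(row) / n_dims >= cutoff]
    if not poor:
        return 0.0
    h = len(poor) / len(deprivations)                       # incidence
    a = sum(sum(row) / n_dims for row in poor) / len(poor)  # intensity
    return h * a

# Four people, three dimensions (say electricity, water, schooling):
people = [[1, 1, 0],   # deprived in 2 of 3 -> poor
          [0, 0, 0],   # deprived in none   -> not poor
          [1, 1, 1],   # deprived in all    -> poor
          [0, 1, 0]]   # deprived in 1 of 3 -> poor at this cutoff
print(m0_measure(people, cutoff=1/3))  # H = 0.75, A = 2/3, so M0 = 0.5
```

Because M0 factors cleanly into H and A, officials can tell whether a region's poverty stems from many people being mildly deprived or fewer people being deprived across many dimensions at once.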
Current Trends.
Getting to Zero.
Using the World Bank definition of $1.25/day, as of September 2013, roughly 1.2 billion people remain in extreme poverty. Nearly half live in India and China, with more than 85% living in just 20 countries. Since the mid-1990s, there has been a steady decline in both the worldwide poverty rate and the total number of extreme poor. In 1990, the percentage of the global population living in extreme poverty was 43.1%, but by 2010 that figure had dropped to 20.6%. This halving of the extreme poverty rate falls in line with the first Millennium Development Goal (MDG1) proposed by former UN Secretary-General Kofi Annan, who called on the international community at the turn of the century to “halv[e] the proportion of people living in extreme poverty…by 2015.”
This reduction in extreme poverty took place most notably in China, Indonesia, India, Pakistan and Vietnam. These five countries lifted a combined 715 million people out of extreme poverty between 1990 and 2010 – more than the global net total of roughly 700 million. This statistical oddity is explained by the fact that the number of people living in extreme poverty in Sub-Saharan Africa rose from 290 million to 414 million over the same period. However, there have been many positive signs for extensive, global poverty reduction as well. Since 1999, the total number of extreme poor has declined by 50 million per year, on average. Moreover, in 2005, for the first time in recorded history, poverty rates began to fall in every region of the world, including Africa.
As noted above, the number of people living in extreme poverty has fallen from 1.9 billion to 1.2 billion over the last 20–25 years. If we remain on our current trajectory, many economists predict we could reach global “zero” by 2030–2035, thus “ending” extreme poverty. Global zero entails a world in which less than 3% of the global population lives in extreme poverty (projected under the most optimistic scenarios to be fewer than 200 million people). This “zero” figure is set at 3% in recognition of the fact that some amount of “frictional” poverty will continue to exist, whether caused by political conflict or unexpected economic fluctuations, at least for the foreseeable future. However, the Brookings Institution notes that any projection about poverty more than a few years into the future runs the risk of being highly uncertain. This is because changes in consumption and distribution throughout the developing world over the next two decades could result in monumental shifts in global poverty, for better or worse.
Others are more pessimistic about this possibility, with many predicting a range of 193 million to 660 million people living in extreme poverty by 2035. Additionally, some believe the rate of poverty reduction will slow down in the developing world, especially in Africa, and as such it will take closer to five decades to reach global “zero.” Despite these reservations, several prominent international and national organizations, including the UN, the World Bank and the United States Federal Government (via USAID), have set a target of reaching global zero by the end of 2030.
Exacerbating Factors.
Extreme poverty does not exist in a vacuum. There are a variety of factors that may reinforce or instigate the existence of extreme poverty, such as weak institutions, cycles of violence and a low level of growth. Recent World Bank research shows that some countries can get caught in a “fragility trap,” in which the above factors prevent the poorest nations from emerging from low-level equilibrium in the long run. Moreover, most of the reduction in extreme poverty over the past twenty years has taken place in countries that have not experienced a civil conflict or have had governing institutions with a strong capacity to actually govern. Thus, to end extreme poverty, it is also important to focus on the interrelated problems of fragility and conflict.
USAID defines fragility as a government’s lack of both legitimacy (the perception that the government is adequate at doing its job) and effectiveness (how good the government is at maintaining law and order in an equitable manner). As fragile nations are unable to equitably and effectively perform the functions of a state, these countries are much more prone to violent unrest and mass inequality. Additionally, in countries with high levels of inequality (a common problem in countries with inadequate governing institutions), much higher growth rates are needed to reduce the rate of poverty when compared with other nations. Moreover, excluding China and India, up to 70% of the world’s poor live in fragile states by some definitions of fragility. Looking further, some analysts project that extreme poverty will be increasingly concentrated in fragile, low-income states like Haiti, Yemen and the Central African Republic over the coming years. However, some academics, such as Andy Sumner, assert that extreme poverty will instead be increasingly concentrated in middle-income countries, creating a “poverty paradox”: the world’s poor do not, by and large, live in the poorest countries.
Despite this debate, addressing the problem of fragility remains a very real issue. To help low-income, fragile states make the transition towards peace and prosperity, the New Deal for Engagement in Fragile States, endorsed by roughly forty countries and multilateral institutions, was created in 2011. This “New Deal” represents an important step towards redressing the problem of fragility, as it was originally articulated by self-identified fragile states who called on the international community not only to “do things differently,” but also to “do different things.”
On the other hand, civil conflict also remains a prime cause of the perpetuation of poverty throughout the developing world. Armed conflict can have severe effects on economic growth for a plethora of reasons – it destroys assets, creates unwanted mass migration, destroys livelihoods and diverts public resources towards war fighting. Significantly, a country that experienced major violence between 1981 and 2005 had extreme poverty rates 21 percentage points higher than a country with no violence. On average, a civil conflict will also cost a country roughly 30 years of GDP growth. Therefore, a renewed commitment from the international community to address the deteriorating situation in highly fragile states is necessary both to prevent the mass loss of life and to break the vicious cycle of extreme poverty.
International Conferences.
Millennium Summit.
On September 6–8, 2000, world leaders gathered at the Millennium Summit held in New York, launching the United Nations Millennium Project suggested by then UN Secretary-General Kofi Annan. Prior to the launch of the conference, the office of Secretary-General Annan released a report entitled We The Peoples: The Role of the United Nations in the 21st Century. In this document, now widely known as the Millennium Report, Kofi Annan called on the international community “to adopt the target of halving the proportion of people living in extreme poverty, and so lifting more than 1 billion people out of it, by 2015.” Citing studies that show “an almost perfect correlation between growth and poverty reduction in poor countries,” Annan urged international leaders to target the problem of extreme poverty across every region without discrimination. The project was managed by Jeffrey Sachs, a noted development economist, who in 2005 released a plan for action called “Investing in Development: A Practical Plan to Achieve the Millennium Development Goals.”
2005 World Summit.
The 2005 World Summit, held September 14–16, was organized to measure international progress towards fulfilling the Millennium Development Goals (MDGs). Notably, the conference brought together more than 170 Heads of State. While world leaders at the summit were encouraged by the reduction of poverty in some nations, they were concerned by the uneven decline of poverty within and among different regions of the globe. However, at the end of the summit, the conference attendees reaffirmed the UN’s commitment to achieve the MDGs by 2015 and urged all supranational, national and non-governmental organizations to follow suit.
Post-2015 Development Agenda.
With the expiration of the Millennium Development Goals approaching in 2015, the international community is focused on accelerating efforts to achieve the goals laid out in the original MDGs. Overall, there has been significant progress towards reducing extreme poverty, with the MDG 1 target of reducing extreme poverty rates by half, met “five years ahead of the 2015 deadline…700 million fewer people lived in conditions of extreme poverty in 2010 than in 1990. However, at the global level 1.2 billion people [were] still living in extreme poverty.” One notable exception to this trend was in Sub-Saharan Africa, the only region where the number of people living in extreme poverty rose from 290 million in 1990 to 414 million in 2010, comprising more than a third of those living in extreme poverty worldwide.
With the aforementioned in mind, the UN convened a High Level Panel (HLP) of Eminent Persons, to advise on a post-2015 development framework. The HLP report, entitled A New Global Partnership: Eradicate Poverty and Transform Economies Through Sustainable Development, was published in May 2013. In the report, the HLP wrote that:
Ending extreme poverty is just the beginning, not the end. It is vital, but our vision must be broader: to start countries on the path of sustainable development – building on the foundations established by the 2012 UN Conference on Sustainable Development in Rio de Janeiro, and meeting a challenge that no country, developed or developing, has met so far. We recommend to the Secretary-General that deliberations on a new development agenda must be guided by the vision of eradicating extreme poverty once and for all, in the context of sustainable development.
Thus, the report determined that a central goal of the post-Millennium Development agenda is to “eradicate extreme poverty…by 2030.” However, the report also emphasized that the MDGs were not enough, as they did not “focus on the devastating effects of conflict and violence on development…the importance to development of good governance and institutions…nor the need for inclusive growth.” Consequently, there now exists synergy between the policy position papers put forward by the United States (through USAID), the World Bank and the UN itself in terms of viewing fragility and a lack of good governance as exacerbating extreme poverty. However, in a departure from the views of other organizations, the commission also proposed that the UN focus not only on extreme poverty (a line drawn at $1.25), but also on a higher target, such as $2. The report notes this change could be made to reflect the fact that escaping extreme poverty is “only a start.”
In addition to the UN, a host of other supranational and national actors such as the European Union and the African Union have published their own positions or recommendations on what should be incorporated in the Post-2015 agenda. The European Commission’s communication, published as A Decent Life for All: From Vision to Collective Action, affirmed the UN’s commitment to “eradicate extreme poverty in our lifetime and put the world on a sustainable path to ensure a decent life for all by 2030.” A unique aspect of the report was the Commission’s environmental focus (in addition to a plethora of other goals such as combating hunger and gender inequality). Specifically, the Commission argued, “long-term poverty reduction…requires inclusive and sustainable growth. Growth should create decent jobs, take place with resource efficiency and within planetary boundaries, and should support efforts to mitigate climate change.” The African Union’s report, entitled Common African Position (CAP) on the Post-2015 Development Agenda, likewise encouraged the international community to focus on eradicating the twin problems of “poverty and exclusion” in our lifetime. Moreover, the CAP pledged that it would “commit to ensure that no person – regardless of ethnicity, gender, geography, disability, race or other status – is denied universal human rights and basic economic opportunities.”
UN LDC Conferences.
The UN Least Developed Country (LDC) conferences were a series of summits organized by the UN over the past few decades, which sought to promote the substantial and even development of so-called “third-world” countries.
1st UN LDC Conference
Held between September 1 and September 14, 1981, in Paris, the first UN LDC Conference was organized to finalize the UN’s “Substantial New Programme of Action” for the 1980s in Least Developed Countries. This program, which was unanimously adopted by the conference attendees, argued for internal reforms in LDCs (meant to encourage economic growth) to be complemented by strong international measures. However, despite the major economic and policy reforms initiated by many of these LDCs, in addition to strong international aid, the economic situation of these countries worsened as a whole in the 1980s. This prompted the organization of a second UN LDC conference almost a decade later.
2nd UN LDC Conference
Held between September 3 and September 14, 1990, once again in Paris, the second UN LDC Conference was convened to measure the progress made by the LDCs towards fulfilling their development goals during the 1980s. Recognizing the problems that plagued the LDCs over the past decade, the conference formulated a new set of national and international policies to accelerate the growth rates of the poorest nations. These new principles were embodied in the “Paris Declaration and Programme of Action for the Least Developed Countries for the 1990s.”
4th UN LDC Conference
The most recent conference, held in May 2011 in Istanbul, recognized that the nature of development had fundamentally changed since the first conference held almost 30 years earlier. In the 21st century, the capital flow into emerging economies has increasingly become dominated by foreign direct investment and remittances, as opposed to bilateral and multilateral assistance. Moreover, since the 1980s, significant structural changes have taken place on the international stage. With the creation of the G-20 conference of the largest economic powers, including many nations in the Global South, formerly “undeveloped” nations now have a much larger say in international relations. Furthermore, the conference recognized that in the midst of a deep global recession, coupled with multiple crises (energy, climate, food, etc.), the international community would have fewer resources to aid the LDCs. Thus, the UN considered the participation of a wide range of stakeholders (not least the LDCs themselves) crucial to the formulation of the conference.
Organizations Working to End Extreme Poverty.
International Organizations.
World Bank.
In 2013, the Board of Governors of the World Bank Group (WBG) set two overriding goals for the WBG to commit itself to in the future. First, to end extreme poverty by 2030, an objective that echoes the sentiments of the UN and the Obama administration. Additionally, the WBG set an interim target of reducing extreme poverty to below 9 percent by 2020. Second, to focus on growth among the bottom 40 percent of people, as opposed to standard GDP growth. This commitment ensures that the growth of the developing world lifts people out of poverty, rather than exacerbating inequality.
As the World Bank’s primary focus is on delivering economic growth to enable equitable prosperity, its development programs are primarily commercial in nature, in contrast to those of the UN. Since the World Bank recognizes that better jobs result in higher income and thus less poverty, the WBG supports employment training initiatives, small business development programs and strong labor protection laws. However, since much of the growth in the developing world has been inequitable, the World Bank has also begun teaming with client states to map out trends in inequality and to propose public policy changes that can level the playing field.
Moreover, the World Bank engages in a variety of nutritional, transfer-payment and transport-based initiatives. Children who experience undernutrition from conception to two years of age have a much higher risk of physical and mental disability. Thus, they are often trapped in poverty and are unable to make a full contribution to the social and economic development of their communities as adults. The WBG estimates that as much as 3% of GDP can be lost as a result of undernutrition among the poorest nations. To combat undernutrition, the WBG has partnered with UNICEF and the WHO to ensure all small children are fully fed. The WBG also offers conditional cash transfers to poor households that meet certain requirements, such as maintaining children’s healthcare or ensuring school attendance. Finally, the WBG understands that investment in public transportation and better roads is key to breaking rural isolation, improving access to healthcare and providing better job opportunities for the world’s poor.
UN.
1. OCHA (Office for the Coordination of Humanitarian Affairs)
The Office for the Coordination of Humanitarian Affairs (OCHA) of the United Nations works to synchronize the disparate international, national and non-governmental efforts to contest poverty. The OCHA seeks to prevent “confusion” in relief operations and to ensure that the humanitarian response to disaster situations has greater accountability and predictability. To do so, OCHA has begun deploying Humanitarian Coordinators and Country Teams to provide a solid architecture for the international community to work through.
2. UNICEF (United Nations Children’s Fund)
The United Nations Children’s Fund (UNICEF) was created by the UN to provide food, clothing and healthcare to European children facing famine and disease in the immediate aftermath of World War II. Since the UN General Assembly extended UNICEF’s mandate indefinitely in 1953, it has actively worked to help children in extreme poverty in more than 190 countries and territories overcome the obstacles that poverty, violence, disease and discrimination place in a child’s path. Its current focus areas are 1) child survival & development, 2) basic education & gender equality, 3) children and HIV/AIDS, and 4) child protection.
3. UNHCR (The UN Refugee Agency)
The UN Refugee Agency (UNHCR) is mandated to lead and coordinate international action to protect refugees worldwide. Its primary purpose is to safeguard the rights of refugees by ensuring anyone can exercise the right to seek asylum in another state, with the option to return home voluntarily, integrate locally or resettle in a third country. The UNHCR operates in over 125 countries, helping approximately 33.9 million persons.
4. WFP (World Food Program)
The World Food Program (WFP) is the largest agency dedicated to fighting hunger worldwide. On average, WFP brings food assistance to more than 90 million people in 75 countries. The WFP not only strives to prevent hunger in the present, but also in the future by developing stronger communities which will make food even more secure on their own. The WFP has a range of expertise from Food Security Analysis, Nutrition, Food Procurement and Logistics.
5. WHO (World Health Organization)
The World Health Organization (WHO) is responsible for providing leadership on global health matters, shaping the health research agenda, articulating evidence-based policy decisions and combating diseases that are induced from poverty, such as HIV/AIDS, malaria and tuberculosis. Moreover, the WHO deals with pressing issues ranging from managing water safety, to dealing with maternal and newborn health.
Bilateral Organizations.
USAID.
The U.S. Agency for International Development (USAID) is the lead U.S. government agency dedicated to ending extreme poverty. Currently the largest bilateral donor in the world, the United States channels the majority of its “development” assistance through USAID and the U.S. Department of State. In President Obama’s 2013 State of the Union address, he declared, “So the United States will join with our allies to eradicate such extreme poverty in the next two decades…which is within our reach.” In response to Obama’s call to action, USAID has made ending extreme poverty central to its mission statement. Under its New Model of Development, USAID seeks to eradicate extreme poverty through the use of innovation in science and technology, by putting a greater emphasis on evidence-based decision-making, and by leveraging the ingenuity of the private sector and global citizens.
A major initiative of the Obama Administration is Power Africa, which aims to bring energy to 20 million people in Sub-Saharan Africa. By reaching out to its international partners, whether commercial or public, the US has leveraged over $14 billion in outside commitments after investing only $7 billion of its own. To ensure that Power Africa reaches the region's poorest, the initiative engages in a transaction-based approach to create systematic change. This includes expanding access to electricity to more than 20,000 additional households that currently live without power.
In terms of specific programming, USAID works in a variety of fields, from preventing hunger, reducing HIV/AIDS, and providing general health and democracy assistance, to dealing with gender issues. To address food insecurity, which affects roughly 842 million people who go to bed hungry each night, USAID coordinates the Feed the Future Initiative (FtF). FtF aims to reduce poverty and undernutrition each by 20 percent over five years. Thanks to PEPFAR and a variety of congruent actors, the incidence of HIV/AIDS, which used to ravage Africa, has been reduced in scope and intensity. Through PEPFAR, the United States has ensured that over five million people have received life-saving antiretroviral drugs, a significant proportion of the eight million people receiving treatment in relatively poor nations.
In terms of general health assistance, USAID has worked to reduce maternal mortality by 30 percent and under-five child mortality by 35 percent, and has accomplished a host of other goals. USAID also supports the gamut of democratic initiatives, from promoting human rights and accountable, fair governance, to supporting free and fair elections and the rule of law. In pursuit of these goals, USAID has increased global political participation by training more than 9,800 domestic election observers and providing civic education to more than 6.5 million people. Since 2012, the Agency has begun integrating critical gender perspectives across all aspects of its programming to ensure all USAID initiatives work to eliminate gender disparities. To do so, USAID seeks to increase the capability of women and girls to realize their rights and determine their own life outcomes. Moreover, USAID supports additional programs that improve women’s access to capital and markets, build their skills in agriculture, and support women’s desire to own businesses.
DfID.
The Department for International Development (DfID) is the UK’s lead agency for eradicating extreme poverty. To do so, DfID focuses on the creation of jobs, empowering women and rapidly responding to humanitarian emergencies.
Some specific examples of DfID projects include governance assistance, educational initiatives, and funding for cutting-edge research. In 2014 alone, DfID will support “freer and fairer” elections in 13 countries. DfID will also help provide 10 million women with access to justice through strengthened judicial systems and will help 40 million people make their authorities more accountable. By 2015, DfID will have helped 9 million children attend primary school, at least half of whom will be girls. Furthermore, through the Research4Development (R4D) project, DfID has funded over 35,000 projects aimed at creating new technologies to help the world’s poorest. These technologies include vaccines for diseases of African cattle, better diagnostic measures for TB, new drugs for combating malaria, and flood-resistant rice. In addition to technological research, R4D also funds projects that seek to understand what specifically about governance structures can be tweaked to help the world’s poorest.
Non-Governmental Movements.
NGOs.
A multitude of non-governmental organizations operate in the field of extreme poverty, actively working to relieve the poorest of the poor of their deprivation. To name but a few notable organizations: Save the Children, the Overseas Development Institute, Concern Worldwide, ONE, and trickleUp have all done a considerable amount of work in extreme poverty.
Save the Children is the leading international organization dedicated to helping the world’s indigent children. In 2013 alone, Save the Children reached over 143 million children through their work, including over 52 million children directly. Save the Children also recently released their own report on “,” in which they argued the international community could feasibly do more than lift the world’s poor above $1.25/day. The Overseas Development Institute (ODI) is the premier UK-based think tank on international development and humanitarian issues. ODI is dedicated to alleviating the suffering of the world’s poor by providing high-quality research and practical policy advice to the world’s development officials. ODI also recently released a paper entitled “,” in which its authors assert that though the international community’s goal of ending extreme poverty by 2030 is laudable, much more targeted resources will be necessary to reach that target. The report states that “To eradicate extreme poverty, massive global investment is required in social assistance, education and pro-poorest economic growth.”
Concern Worldwide is an international humanitarian organization whose mission is to end extreme poverty by influencing decision makers at all levels of government, from local to international. Concern has also produced a report on extreme poverty in which they explain their own conception of extreme poverty from an NGO’s standpoint. In this paper, named “,” the report’s creators write that extreme poverty entails more than just living under $1.25/day; it also includes having a small number of assets and being vulnerable to severe negative shocks (whether natural or man-made).
ONE, the organization co-founded by Bono, is a non-profit organization funded almost entirely by foundations, individual philanthropists and corporations. ONE’s goals include raising public awareness and working with political leaders to fight preventable diseases, increase government accountability and increase investment in nutrition. Finally, trickleUp is a microenterprise development program targeted at those living on under $1.25/day, which provides the indigent with resources to build a sustainable livelihood through both direct financing and considerable training efforts.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1271'>
Analytical Engine
The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician Charles Babbage.
It was first described in 1837 as the successor to Babbage's Difference Engine, a design for a mechanical calculator. The Analytical Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until the 1940s that the first general-purpose computers were actually built.
Design.
During Babbage's difference engine project, he realized that a much more general design, the Analytical Engine, was possible. The input (programs and data) was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.7 kB). An arithmetical unit (the 'mill') would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. (Later drawings depict a regularized grid layout.) Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, to be stored in the form of pegs inserted into rotating drums called 'barrels', to carry out some of the more complex instructions the user's program might specify.
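The parenthetical "ca. 16.7 kB" can be checked with a quick information-content estimate. The following is a back-of-the-envelope sketch, not anything from Babbage's plans; whether one counts a sign digit per number, or uses decimal versus binary kilobytes, shifts the figure slightly:

```python
import math

# Information content of the store: 1,000 numbers of 40 decimal digits each.
numbers = 1000
digits_per_number = 40
bits_per_digit = math.log2(10)   # ~3.32 bits of information per decimal digit

total_bytes = numbers * digits_per_number * bits_per_digit / 8
print(round(total_bytes / 1000, 1))   # ~16.6 kB, in line with the figure quoted above
```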
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards.
In 1842, the Italian mathematician Luigi Menabrea, whom Babbage had met while travelling in Italy, wrote a description of the engine in French. In 1843, the description was translated into English and extensively annotated by Ada Byron, Countess of Lovelace, who had become interested in the engine eight years earlier. In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine, she has been described as the first computer programmer. The modern computer programming language Ada is named in her honor.
Construction.
Late in his life, Babbage sought ways to build a simplified version of the machine, and assembled a small part of it before his death in 1871.
In 1878, a committee of the British Association for the Advancement of Science recommended against constructing the Analytical Engine.
In 1910, Babbage's son Henry Prevost Babbage reported that a part of the mill and the printing apparatus had been constructed, and had been used to calculate a (faulty) list of multiples of pi. This constituted only a small part of the whole engine; it was not programmable and had no storage. (Popular images of this section have sometimes been mislabelled, implying that it was the entire mill or even the entire engine.) Henry Babbage's 'Analytical Engine Mill' is on display at the Science Museum in London. Henry also proposed building a demonstration version of the full engine, with a smaller storage capacity: 'perhaps for a first machine ten (columns) would do, with fifteen wheels in each'. Such a version could manipulate 20 numbers of 25 digits each, and what it could be told to do with those numbers could still be impressive. 'It is only a question of cards and time', wrote Henry Babbage in 1888, '.. and there is no reason why (twenty thousand) cards should not be used if necessary, in an Analytical Engine for the purposes of the mathematician'.
In 1991, the London Science Museum built a complete and working specimen of Babbage's Difference Engine No. 2, a design that incorporated refinements Babbage discovered during the development of the Analytical Engine. This machine was built using materials and engineering tolerances that would have been available to Babbage, quelling the suggestion that Babbage's designs could not have been produced using the manufacturing technology of his time.
In October 2010, John Graham-Cumming started a campaign to raise funds by 'public subscription' to enable serious historical and academic study of Babbage's plans, with a view to building and testing a fully working virtual design, which would in turn enable construction of the physical Analytical Engine. As of October 2013, no actual construction had been reported.
Instruction set.
Babbage is not known to have written down an explicit set of instructions for the engine in the manner of a modern processor manual. Instead he showed his programs as lists of states during their execution, showing what operator was run at each step with little indication of how the control flow would be guided. Bromley (see below) has assumed that the card deck could be read in forwards and backwards directions as a function of conditional branching after testing for conditions, which would make the engine Turing-complete:
The introduction for the first time, in 1845, of user operations for a variety of service functions including, most importantly, an effective system for user control of looping in user programs.
There is no indication how the direction of turning of the operation and variable cards is specified. In the absence of other evidence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned backward as is necessary to implement the loops used in Babbage’s sample programs. There would be no mechanical or microprogramming difficulty in placing the direction of motion under the control of the user.
From 'Bromley, A.G. Babbage's Analytical Engine Plans 28 and 28a. The programmer's interface. Annals of the History of Computing, IEEE. 2000'
In their emulator of the engine, Fourmilab say:
The Engine's Card Reader is not constrained to simply process the cards in a chain one after another from start to finish. It can, in addition, directed by the very cards it reads and advised by whether the Mill's run-up lever is activated, either advance the card chain forward, skipping the intervening cards, or backward, causing previously-read cards to be processed once again.
This emulator does provide a written symbolic instruction set, though this has been constructed by its authors rather than based on Babbage's original works. For example, a factorial program would be written as:
N0 6
N1 1
N2 1
×
L1
L0
S1
−
L0
L2
S0
L2
L0
CB?11
where the CB is the conditional branch instruction or 'combination card' used to make the control flow jump, in this case backwards by 11 cards.
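As a reading aid, the loop in this card program can be sketched in ordinary Python. The variable roles assumed here (V0 as the counter, V1 as the running product, V2 as the constant 1) and the explicit comparison are interpretive: on the Engine itself the backward jump would be signalled by the Mill's run-up lever rather than a named test.

```python
def card_factorial(n: int) -> int:
    """Illustrative rendering of the emulator's factorial card chain."""
    v0, v1, v2 = n, 1, 1       # number cards: N0 n, N1 1, N2 1
    while True:
        v1 = v1 * v0           # load V1 and V0 into the Mill, store the product in V1
        v0 = v0 - v2           # decrement the counter: V0 := V0 - V2
        if v0 <= v2:           # the combination card loops back until the counter reaches 1
            return v1

print(card_factorial(6))       # 720
```

Each pass through the `while` loop corresponds to one backward jump of the card chain.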
Influence.
Predicted influence.
Babbage understood that the existence of an automatic computer would kindle interest in the field now known as algorithmic efficiency, writing in his 'Passages from the Life of a Philosopher', 'As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise—By what course of calculation can these results be arrived at by the machine in the 'shortest time'?'
Computer science.
Swedish engineers Georg and Edvard Scheutz, inspired by a description of the difference engine, created a mechanical calculation device based on the design in 1853. Table-sized instead of room-sized, the device was capable of calculating tables, but imperfectly.
From 1872 Henry continued his father's work diligently, and then intermittently after his retirement in 1875. Percy Ludgate wrote about the engine in 1915 and even designed his own Analytical Engine (it was drawn up in detail, but never built). Ludgate's engine would have been much smaller than Babbage's, at about 8 cubic feet (230 L), and hypothetically would be capable of multiplying two 20-decimal-digit numbers in about six seconds.
Despite this ground work, Babbage's work fell into historical obscurity, and the Analytical Engine was unknown to builders of electro-mechanical and electronic computing machines in the 1930s and 1940s when they began their work, resulting in the need to re-invent many of the architectural innovations Babbage had proposed. Howard Aiken, who built the quickly-obsoleted electromechanical calculator, the Harvard Mark I, between 1937 and 1945, praised Babbage's work likely as a way of enhancing his own stature, but knew nothing of the Analytical Engine's architecture during the construction of the Mark I, and considered his visit to the constructed portion of the Analytical Engine 'the greatest disappointment of my life'. The Mark I showed no influence from the Analytical Engine and lacked the Analytical Engine's most prescient architectural feature, conditional branching. J. Presper Eckert and John W. Mauchly similarly were not aware of the details of Babbage's Analytical Engine work prior to the completion of their design for the first electronic general-purpose computer, the ENIAC.
Comparison to other early computers.
If the Analytical Engine had been built, it would have been digital, programmable and Turing-complete. However, it would have been very slow. Ada Lovelace reported in her notes on the Analytical Engine: 'Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes'. By comparison the Harvard Mark I could perform the same task in just six seconds. A modern PC can do the same thing in well under a millionth of a second.
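Taking the quoted times at face value, the relative speeds are easy to tabulate. This is a rough comparison; the "well under a millionth of a second" figure is treated here as 1 µs, a generous upper bound:

```python
# Times to form the product of two 20-digit numbers, per the figures quoted above.
analytical_engine_s = 3 * 60   # 3 minutes
harvard_mark_i_s = 6           # 6 seconds
modern_pc_s = 1e-6             # 1 microsecond, as an upper bound

print(analytical_engine_s / harvard_mark_i_s)  # Mark I roughly 30x faster than the Engine
print(harvard_mark_i_s / modern_pc_s)          # a modern PC millions of times faster still
```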
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1273'>
Augustus
Augustus (; 23 September 63 BC – 19 August 14 AD) was the founder of the Roman Empire and its first Emperor, ruling from 27 BC until his death in 14 AD.
He was born Gaius Octavius into an old and wealthy equestrian branch of the plebeian Octavii family. Following the assassination of his maternal great-uncle Julius Caesar in 44 BC, Caesar's will named Octavius as his adopted son and heir. Together with Mark Antony and Marcus Lepidus, he formed the Second Triumvirate to defeat the assassins of Caesar. Following their victory at Philippi, the Triumvirate divided the Roman Republic among themselves and ruled as military dictators. The Triumvirate was eventually torn apart under the competing ambitions of its members: Lepidus was driven into exile and stripped of his position, and Antony committed suicide following his defeat at the Battle of Actium by Augustus in 31 BC.
After the demise of the Second Triumvirate, Augustus restored the outward facade of the free Republic, with governmental power vested in the Roman Senate, the executive magistrates, and the legislative assemblies. In reality, however, he retained his autocratic power over the Republic as a military dictator. By law, Augustus held a collection of powers granted to him for life by the Senate, including supreme military command, and those of tribune and censor. It took several years for Augustus to develop the framework within which a formally republican state could be led under his sole rule. He rejected monarchical titles, and instead called himself 'Princeps Civitatis' ('First Citizen of the State'). The resulting constitutional framework became known as the Principate, the first phase of the Roman Empire.
The reign of Augustus initiated an era of relative peace known as the 'Pax Romana' ('The Roman Peace'). Despite continuous wars of imperial expansion on the Empire's frontiers and one year-long civil war over the imperial succession, the Roman world was largely free from large-scale conflict for more than two centuries. Augustus dramatically enlarged the Empire, annexing Egypt, Dalmatia, Pannonia, Noricum, and Raetia, expanded possessions in Africa, expanded into Germania, and completed the conquest of Hispania.
Beyond the frontiers, he secured the Empire with a buffer region of client states, and made peace with the Parthian Empire through diplomacy. He reformed the Roman system of taxation, developed networks of roads with an official courier system, established a standing army, established the Praetorian Guard, created official police and fire-fighting services for Rome, and rebuilt much of the city during his reign.
Augustus died in 14 AD at the age of 75. He may have died from natural causes, although there were unconfirmed rumors that his wife Livia poisoned him. He was succeeded as Emperor by his adopted son (also stepson and former son-in-law), Tiberius.
Name.
Throughout his life, the man historians refer to as Augustus was known by many names.
Early life.
While his paternal family was from the town of Velletri, southeast of Rome, Augustus was born in the city of Rome on 23 September 63 BC. He was born at Ox Head, a small property on the Palatine Hill, very close to the Roman Forum. He was given the name Gaius Octavius Thurinus, his cognomen possibly commemorating his father's victory at Thurii over a rebellious band of slaves.
Due to the crowded nature of Rome at the time, Octavius was taken to his father's home village at Velletri to be raised. Octavius only mentions his father's equestrian family briefly in his memoirs. His paternal great-grandfather was a military tribune in Sicily during the Second Punic War. His grandfather had served in several local political offices. His father, also named Gaius Octavius, had been governor of Macedonia. His mother, Atia, was the niece of Julius Caesar.
In 59 BC, when he was four years old, his father died. His mother married a former governor of Syria, Lucius Marcius Philippus. Philippus claimed descent from Alexander the Great, and was elected consul in 56 BC. Philippus never had much of an interest in young Octavius. Because of this, Octavius was raised by his grandmother (and Julius Caesar's sister), Julia Caesaris.
In 52 or 51 BC, Julia Caesaris died. Octavius delivered the funeral oration for his grandmother. From this point, his mother and stepfather took a more active role in raising him. He donned the 'toga virilis' four years later, and was elected to the College of Pontiffs in 47 BC. The following year he was put in charge of the Greek games that were staged in honor of the Temple of Venus Genetrix, built by Julius Caesar. According to Nicolaus of Damascus, Octavius wished to join Caesar's staff for his campaign in Africa, but gave way when his mother protested. In 46 BC, she consented for him to join Caesar in Hispania, where he planned to fight the forces of Pompey, Caesar's late enemy, but Octavius fell ill and was unable to travel.
When he had recovered, he sailed to the front, but was shipwrecked; after coming ashore with a handful of companions, he crossed hostile territory to Caesar's camp, which impressed his great-uncle considerably. Velleius Paterculus reports that after that time, Caesar allowed the young man to share his carriage. When back in Rome, Caesar deposited a new will with the Vestal Virgins, naming Octavius as the prime beneficiary.
Rise to power.
Heir to Caesar.
At the time Caesar was killed on the Ides of March (15 March) 44 BC, Octavius was studying and undergoing military training in Apollonia, Illyria. Rejecting the advice of some army officers to take refuge with the troops in Macedonia, he sailed to Italia to ascertain whether he had any potential political fortunes or security. After landing at Lupiae near Brundisium, he learned the contents of Caesar's will, and only then did he decide to become Caesar's political heir as well as heir to two-thirds of his estate.
Caesar, having no living legitimate children under Roman law, had adopted his grand-nephew Octavius as his son and main heir. Upon his adoption, Octavius assumed his great-uncle's name, Gaius Julius Caesar. Although Romans who had been adopted into a new family usually retained their old nomen in cognomen form (e.g. 'Octavianus' for one who had been an Octavius, 'Aemilianus' for one who had been an Aemilius, etc.) there is no evidence that he ever bore the name 'Octavianus', as it would have made his modest origins too obvious.
Despite the fact that he never officially bore the name 'Octavianus', however, to save confusing the dead dictator with his heir, historians often refer to the new Caesar—between his adoption and his assumption, in 27 BC, of the name Augustus—as 'Octavian'. Mark Antony later charged that Octavian had earned his adoption by Caesar through sexual favours, though Suetonius, in his work 'Lives of the Twelve Caesars', describes Antony's accusation as political slander.
To make a successful entry into the upper echelons of the Roman political hierarchy, Octavian could not rely on his limited funds. After a warm welcome by Caesar's soldiers at Brundisium, Octavian demanded a portion of the funds that were allotted by Caesar for the intended war against Parthia in the Middle East. This amounted to 700 million sesterces stored at Brundisium, the staging ground in Italy for military operations in the east.
A later senatorial investigation into the disappearance of the public funds took no action against Octavian, since he subsequently used that money to raise troops against the Senate's arch enemy, Mark Antony. Octavian made another bold move in 44 BC when, without official permission, he appropriated the annual tribute that had been sent from Rome's Near Eastern province to Italy.
Octavian began to bolster his personal forces with Caesar's veteran legionaries and with troops designated for the Parthian war, gathering support by emphasizing his status as heir to Caesar. On his march to Rome through Italy, Octavian's presence and newly acquired funds attracted many, winning over Caesar's former veterans stationed in Campania. By June he had gathered an army of 3,000 loyal veterans, paying each a salary of 500 denarii.
Arriving in Rome on 6 May 44 BC, Octavian found the consul Mark Antony, Caesar's former colleague, in an uneasy truce with the dictator's assassins; they had been granted a general amnesty on 17 March, yet Antony succeeded in driving most of them out of Rome. This was due to his 'inflammatory' eulogy given at Caesar's funeral, mounting public opinion against the assassins.
Although Mark Antony was amassing political support, Octavian still had opportunity to rival him as the leading member of the faction supporting Caesar. Mark Antony had lost the support of many Romans and supporters of Caesar when he, at first, opposed the motion to elevate Caesar to divine status. Octavian failed to persuade Antony to relinquish Caesar's money to him. During the summer he managed to win support from Caesarian sympathizers, however, who saw the younger heir as the lesser evil and hoped to manipulate him, or to bear with him during their efforts to get rid of Antony.
Octavian began to make common cause with the Optimates, the former enemies of Caesar. In September, the leading Optimate orator Marcus Tullius Cicero began to attack Antony in a series of speeches portraying him as a threat to the Republican order. With opinion in Rome turning against him and his year of consular power nearing its end, Antony attempted to pass laws which would lend him control over Cisalpine Gaul, which had been assigned as part of his province, from Decimus Junius Brutus Albinus, one of Caesar's assassins.
Octavian meanwhile built up a private army in Italy by recruiting Caesarian veterans, and on 28 November won over two of Antony's legions with the enticing offer of monetary gain. In the face of Octavian's large and capable force, Antony saw the danger of staying in Rome, and to the relief of the Senate, he fled to Cisalpine Gaul, which was to be handed to him on 1 January.
First conflict with Antony.
After Decimus Brutus refused to give up Cisalpine Gaul, Antony besieged him at Mutina. The resolutions passed by the Senate to stop the violence were rejected by Antony, as the Senate had no army of its own to challenge him; this provided an opportunity for Octavian, who already was known to have armed forces. Cicero also defended Octavian against Antony's taunts about Octavian's lack of noble lineage and aping of Julius Caesar's name; he stated 'we have no more brilliant example of traditional piety among our youth.'
At the urging of Cicero, the Senate inducted Octavian as senator on 1 January 43 BC, yet he also was given the power to vote alongside the former consuls. In addition, Octavian was granted 'imperium' (commanding power), which made his command of troops legal, sending him to relieve the siege along with Hirtius and Pansa (the consuls for 43 BC). In April 43 BC, Antony's forces were defeated at the battles of Forum Gallorum and Mutina, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies.
After heaping many more rewards on Decimus Brutus than on Octavian for defeating Antony, the Senate attempted to give command of the consular legions to Decimus Brutus, yet Octavian decided not to cooperate. Instead, Octavian stayed in the Po Valley and refused to aid any further offensive against Antony. In July, an embassy of centurions sent by Octavian entered Rome and demanded that he receive the consulship left vacant by Hirtius and Pansa.
Octavian also demanded that the decree declaring Antony a public enemy should be rescinded. When this was refused, he marched on the city with eight legions. He encountered no military opposition in Rome, and on 19 August 43 BC was elected consul with his relative Quintus Pedius as co-consul. Meanwhile, Antony formed an alliance with Marcus Aemilius Lepidus, another leading Caesarian.
Second Triumvirate.
Proscriptions.
In a meeting near Bologna in October 43 BC, Octavian, Antony, and Lepidus formed a junta called the Second Triumvirate. This explicit arrogation of special powers lasting five years was then supported by law passed by the plebs, unlike the unofficial First Triumvirate formed by Gnaeus Pompeius Magnus, Julius Caesar, and Marcus Licinius Crassus. The triumvirs then set in motion proscriptions, in which 300 senators and 2,000 'equites' allegedly were branded as outlaws and deprived of their property and, for those who failed to escape, their lives.
The estimation that 300 senators were proscribed was presented by Appian, although his earlier contemporary Livy asserted that only 130 senators had been proscribed. This decree issued by the triumvirate was motivated in part by a need to raise money to pay the salaries of their troops for the upcoming conflict against Caesar's assassins, Marcus Junius Brutus and Gaius Cassius Longinus. Rewards for their arrest gave incentive for Romans to capture those proscribed, while the assets and properties of those arrested were seized by the triumvirs.
Contemporary Roman historians provide conflicting reports as to which triumvir was more responsible for the proscriptions and killing, however, the sources agree that enacting the proscriptions was a means by all three factions to eliminate political enemies. Marcus Velleius Paterculus asserted that Octavian tried to avoid proscribing officials whereas Lepidus and Antony were to blame for initiating them. Cassius Dio defended Octavian as trying to spare as many as possible, whereas Antony and Lepidus, being older and involved in politics longer, had many more enemies to deal with.
This claim was rejected by Appian, who maintained that Octavian shared an equal interest with Lepidus and Antony in eradicating his enemies. Suetonius presented the case that Octavian, although reluctant at first to proscribe officials, nonetheless pursued his enemies with more rigor than the other triumvirs. Plutarch described the proscriptions as a ruthless and cutthroat swapping of friends and family among Antony, Lepidus, and Octavian. For example, Octavian allowed the proscription of his ally Cicero, Antony the proscription of his maternal uncle Lucius Julius Caesar (the consul of 64 BC), and Lepidus his brother Paullus.
Battle of Philippi and division of territory.
On 1 January 42 BC, the Senate posthumously recognized Julius Caesar as a divinity of the Roman state, 'Divus Iulius'. Octavian was able to further his cause by emphasizing the fact that he was 'Divi filius', 'Son of God'. Antony and Octavian then sent 28 legions by sea to face the armies of Brutus and Cassius, who had built their base of power in Greece. After two battles at Philippi in Macedonia in October 42, the Caesarian army was victorious and Brutus and Cassius committed suicide. Mark Antony would later use the examples of these battles as a means to belittle Octavian, as both battles were decisively won with the use of Antony's forces. In addition to claiming responsibility for both victories, Antony also branded Octavian as a coward for handing over his direct military control to Marcus Vipsanius Agrippa instead.
After Philippi, a new territorial arrangement was made among the members of the Second Triumvirate. While Antony placed Gaul, the provinces of Hispania, and Italia in the hands of Octavian, Antony traveled east to Egypt where he allied himself with Queen Cleopatra VII, the former lover of Julius Caesar and mother of Caesar's infant son, Caesarion. Lepidus was left with the province of Africa, stymied by Antony, who conceded Hispania to Octavian instead.
Octavian was left to decide where in Italy to settle the tens of thousands of veterans of the Macedonian campaign, whom the triumvirs had promised to discharge. The tens of thousands who had fought on the republican side with Brutus and Cassius, who could easily ally with a political opponent of Octavian if not appeased, also required land. There was no more government-controlled land to allot as settlements for their soldiers, so Octavian had to choose one of two options: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who could mount a considerable opposition against him in the Roman heartland. Octavian chose the former. There were as many as eighteen Roman towns affected by the new settlements, with entire populations driven out or at least given partial evictions.
Rebellion and marriage alliances.
Widespread dissatisfaction with Octavian over these settlements of his soldiers encouraged many to rally at the side of Lucius Antonius, who was brother of Mark Antony and supported by a majority in the Senate. Meanwhile, Octavian asked for a divorce from Clodia Pulchra, the daughter of Fulvia and her first husband Publius Clodius Pulcher. Claiming that his marriage with Clodia had never been consummated, he returned her to her mother, Mark Antony's wife. Fulvia decided to take action. Together with Lucius Antonius, she raised an army in Italy to fight for Antony's rights against Octavian. Lucius and Fulvia took a political and martial gamble in opposing Octavian, however, since the Roman army still depended on the triumvirs for their salaries. Lucius and his allies ended up in a defensive siege at Perusia (modern Perugia), where Octavian forced them into surrender in early 40 BC.
Lucius and his army were spared, due to his kinship with Antony, the strongman of the East, while Fulvia was exiled to Sicyon. Octavian showed no mercy, however, for the mass of allies loyal to Lucius; on 15 March, the anniversary of Julius Caesar's assassination, he had 300 Roman senators and equestrians executed for allying with Lucius. Perusia also was pillaged and burned as a warning for others. This bloody event sullied Octavian's reputation and was criticized by many, such as the Augustan poet Sextus Propertius.
Sextus Pompeius, son of the First Triumvir Pompey and still a renegade general following Julius Caesar's victory over his father, was established in Sicily and Sardinia as part of an agreement reached with the Second Triumvirate in 39 BC. Both Antony and Octavian were vying for an alliance with Pompeius, who, ironically, was a member of the republican party, not the Caesarian faction. Octavian succeeded in a temporary alliance when in 40 BC he married Scribonia, a daughter of Lucius Scribonius Libo who was a follower of Pompeius as well as his father-in-law. Scribonia gave birth to Octavian's only natural child, Julia, who was born the same day that he divorced her to marry Livia Drusilla, little more than a year after their marriage.
While in Egypt, Antony had been engaged in an affair with Cleopatra and had fathered three children with her. Aware of his deteriorating relationship with Octavian, Antony left Cleopatra; he sailed to Italy in 40 BC with a large force to oppose Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight due to their Caesarian cause, while the legions under their command followed suit. Meanwhile in Sicyon, Antony's wife Fulvia died of a sudden illness while Antony was en route to meet her. Fulvia's death and the mutiny of their centurions allowed the two remaining triumvirs to effect a reconciliation.
In the autumn of 40, Octavian and Antony approved the Treaty of Brundisium, by which Lepidus would remain in Africa, Antony in the East, Octavian in the West. The Italian peninsula was left open to all for the recruitment of soldiers, but in reality, this provision was useless for Antony in the East. To further cement relations of alliance with Mark Antony, Octavian gave his sister, Octavia Minor, in marriage to Antony in late 40 BC. During their marriage, Octavia gave birth to two daughters (known as Antonia Major and Antonia Minor).
War with Pompeius.
Sextus Pompeius threatened Octavian in Italy by denying the peninsula shipments of grain through the Mediterranean; Pompeius' own son was put in charge as naval commander in the effort to cause widespread famine in Italy. Pompeius' control over the sea prompted him to take on the name 'Neptuni filius', 'son of Neptune'. A temporary peace agreement was reached in 39 BC with the treaty of Misenum; the blockade on Italy was lifted once Octavian granted Pompeius Sardinia, Corsica, Sicily, and the Peloponnese, and ensured him a future position as consul for 35 BC.
The territorial agreement between the triumvirate and Sextus Pompeius began to crumble once Octavian divorced Scribonia and married Livia on 17 January 38 BC. One of Pompeius' naval commanders betrayed him and handed over Corsica and Sardinia to Octavian. Octavian lacked the resources to confront Pompeius alone, however, so an agreement was reached with the Second Triumvirate's extension for another five-year period beginning in 37 BC.
In supporting Octavian, Antony expected to gain support for his own campaign against Parthia, desiring to avenge Rome's defeat at Carrhae in 53 BC. In an agreement reached at Tarentum, Antony provided 120 ships for Octavian to use against Pompeius, while Octavian was to send 20,000 legionaries to Antony for use against Parthia. Octavian sent only a tenth of the number promised, however, which Antony viewed as an intentional provocation.
Octavian and Lepidus launched a joint operation against Sextus in Sicily in 36 BC. Despite setbacks for Octavian, the naval fleet of Sextus Pompeius was almost entirely destroyed on 3 September by the general Agrippa at the naval battle of Naulochus. Sextus fled with his remaining forces to the east, where he was captured and executed in Miletus by one of Antony's generals the following year. As Lepidus and Octavian accepted the surrender of Pompeius' troops, Lepidus attempted to claim Sicily for himself, ordering Octavian to leave. Lepidus' troops deserted him, however, and defected to Octavian since they were weary of fighting and found Octavian's promises of money to be enticing.
Lepidus surrendered to Octavian and was permitted to retain the office of 'pontifex maximus' (head of the college of priests), but was ejected from the Triumvirate, his public career at an end, and effectively was exiled to a villa at Cape Circei in Italy. The Roman dominions were now divided between Octavian in the West and Antony in the East. To maintain peace and stability in his portion of the Empire, Octavian assured Rome's citizens of their rights to property. This time he settled his discharged soldiers outside of Italy, while returning to their former Roman owners 30,000 slaves who had previously fled to Pompeius to join his army and navy. To ensure his own safety and that of Livia and Octavia once he returned to Rome, Octavian had the Senate grant him, his wife, and his sister tribunician immunity, or 'sacrosanctitas'.
War with Antony.
Meanwhile, Antony's campaign against Parthia turned disastrous, tarnishing his image as a leader, and the mere 2,000 legionaries sent by Octavian to Antony were hardly enough to replenish his forces. On the other hand, Cleopatra could restore his army to full strength, and since he already was engaged in a romantic affair with her, he decided to send Octavia back to Rome. Octavian used this to spread propaganda implying that Antony was becoming less than Roman because he rejected a legitimate Roman spouse for an 'Oriental paramour'. In 36 BC, Octavian used a political ploy to make himself look less autocratic and Antony more the villain by proclaiming that the civil wars were coming to an end, and that he would step down as triumvir, if only Antony would do the same; Antony refused.
After Roman troops captured the Kingdom of Armenia in 34 BC, Antony made his son Alexander Helios the ruler of Armenia; he also awarded the title 'Queen of Kings' to Cleopatra, acts which Octavian used to convince the Roman Senate that Antony had ambitions to diminish the preeminence of Rome. When Octavian became consul once again on 1 January 33 BC, he opened the following session in the Senate with a vehement attack on Antony's grants of titles and territories to his relatives and to his queen.
The breach between Antony and Octavian prompted a large portion of the Senators, as well as both of that year's consuls, to leave Rome and defect to Antony; however, Octavian received two key deserters from Antony in the autumn of 32 BC. These defectors, Munatius Plancus and Marcus Titius, gave Octavian the information he needed to confirm with the Senate all the accusations he made against Antony.
Octavian forcibly entered the temple of the Vestal Virgins and seized Antony's secret will, which he promptly publicized. The will would have given away Roman-conquered territories as kingdoms for his sons to rule, and designated Alexandria as the site for a tomb for him and his queen. In late 32 BC, the Senate officially revoked Antony's powers as consul and declared war on Cleopatra's regime in Egypt.
In early 31 BC, while Antony and Cleopatra were temporarily stationed in Greece, Octavian gained a preliminary victory when the navy under the command of Agrippa successfully ferried troops across the Adriatic Sea. While Agrippa cut off Antony and Cleopatra's main force from their supply routes at sea, Octavian landed on the mainland opposite the island of Corcyra (modern Corfu) and marched south. Trapped on land and sea, deserters of Antony's army fled to Octavian's side daily while Octavian's forces were comfortable enough to make preparations.
In a desperate attempt to break free of the naval blockade, Antony's fleet sailed through the bay of Actium on the western coast of Greece. It was there that Antony's fleet faced the much larger fleet of smaller, more maneuverable ships under commanders Agrippa and Gaius Sosius in the battle of Actium on 2 September 31 BC. Antony and his remaining forces were spared only due to a last-ditch effort by Cleopatra's fleet that had been waiting nearby.
Octavian pursued them, and after another defeat in Alexandria on 1 August 30 BC, Antony and Cleopatra committed suicide; Antony fell on his own sword and was taken by his soldiers back to Alexandria where he died in Cleopatra's arms. Cleopatra died soon after, reputedly by the venomous bite of an asp or by poison. Having exploited his position as Caesar's heir to further his own political career, Octavian was only too well aware of the dangers in allowing another to do so and, following the advice of Arius Didymus that 'two Caesars are one too many', he ordered Caesarion—Julius Caesar's son by Cleopatra—to be killed, along with Antony's eldest son, Antyllus, whilst sparing Cleopatra's children by Antony.
Octavian had previously shown little mercy to surrendered enemies and acted in ways that had proven unpopular with the Roman people, yet he was given credit for pardoning many of his opponents after the Battle of Actium.
Octavian becomes Augustus.
After Actium and the defeat of Antony and Cleopatra, Octavian was in a position to rule the entire Republic under an unofficial principate, but would have to achieve this through incremental power gains, courting the Senate and the people while upholding the republican traditions of Rome, so as not to appear to be aspiring to dictatorship or monarchy. Marching into Rome, Octavian and Marcus Agrippa were elected as dual consuls by the Senate.
Years of civil war had left Rome in a state of near lawlessness, but the Republic was not prepared to accept the control of Octavian as a despot. At the same time, Octavian could not simply give up his authority without risking further civil wars amongst the Roman generals, and even if he desired no position of authority whatsoever, his position demanded that he look to the well-being of the city of Rome and the Roman provinces. Octavian's aims from this point forward were to return Rome to a state of stability, traditional legality and civility by lifting the overt political pressure imposed on the courts of law and ensuring free elections in name at least.
First settlement.
In 27 BC, Octavian made a show of returning full power to the Roman Senate and relinquishing his control of the Roman provinces and their armies. Under his consulship, however, the Senate had little power in initiating legislation by introducing bills for senatorial debate. Although Octavian was no longer in direct control of the provinces and their armies, he retained the loyalty of active duty soldiers and veterans alike. The careers of many clients and adherents depended on his patronage, as his financial power in the Roman Republic was unrivaled. The historian Werner Eck states:
The sum of his power derived first of all from various powers of office delegated to him by the Senate and people, secondly from his immense private fortune, and thirdly from numerous patron-client relationships he established with individuals and groups throughout the Empire. All of them taken together formed the basis of his 'auctoritas', which he himself emphasized as the foundation of his political actions.
To a large extent the public was aware of the vast financial resources Augustus commanded. When he failed to encourage enough senators to finance the building and maintenance of networks of roads in Italy, he undertook direct responsibility for them in 20 BC. This was publicized on the Roman currency issued in 16 BC, after he donated vast amounts of money to the 'aerarium Saturni', the public treasury.
According to H.H. Scullard, however, Augustus's power was based on the exercise of 'a predominant military power and ... the ultimate sanction of his authority was force, however much the fact was disguised.'
The Senate proposed to Octavian, the victor of Rome's civil wars, that he once again assume command of the provinces. The Senate's proposal was a ratification of Octavian's extra-constitutional power. Through the Senate, Octavian was able to continue the appearance of a still-functional constitution. Feigning reluctance, he accepted a ten-year responsibility of overseeing provinces that were considered chaotic.
The provinces ceded to him, that he might pacify them within the promised ten-year period, comprised much of the conquered Roman world, including all of Hispania and Gaul, Syria, Cilicia, Cyprus, and Egypt. Moreover, command of these provinces provided Octavian with control over the majority of Rome's legions.
While Octavian acted as consul in Rome, he dispatched senators to the provinces under his command as his representatives to manage provincial affairs and ensure his orders were carried out. On the other hand, the provinces not under Octavian's control were overseen by governors chosen by the Roman Senate. Octavian became the most powerful political figure in the city of Rome and in most of its provinces, but did not have sole monopoly on political and martial power.
The Senate still controlled North Africa, an important regional producer of grain, as well as Illyria and Macedonia, two martially strategic regions with several legions. However, with control of only five or six legions distributed amongst three senatorial proconsuls, compared to the twenty legions under the control of Augustus, the Senate's control of these regions did not amount to any political or military challenge to Octavian.
The Senate's control over some of the Roman provinces helped maintain a republican façade for the autocratic Principate. Also, Octavian's control of entire provinces for the objective of securing peace and creating stability followed Republican-era precedents, in which such prominent Romans as Pompey had been granted similar military powers in times of crisis and instability.
On 16 January 27 BC the Senate gave Octavian the new titles of 'Augustus' and 'Princeps'. 'Augustus', from the Latin word 'augere' (meaning 'to increase'), can be translated as 'the illustrious one'. It was a title of religious rather than political authority.
According to Roman religious beliefs, the title symbolized a stamp of authority over humanity—and in fact nature—that went beyond any constitutional definition of his status. After the harsh methods employed in consolidating his control, the change in name would also serve to demarcate his benign reign as Augustus from his reign of terror as Octavian. His new title of Augustus was also more favorable than 'Romulus', the previous one he styled for himself in reference to the story of Romulus and Remus (founders of Rome), which would symbolize a second founding of Rome.
However, the title of 'Romulus' was associated too strongly with notions of monarchy and kingship, an image Octavian tried to avoid. 'Princeps' comes from the Latin phrase 'primum caput', 'the first head', originally meaning the oldest or most distinguished senator whose name would appear first on the senatorial roster; in the case of Augustus it became an almost regnal title for a leader who was first in charge. 'Princeps' had also been a title under the Republic for those who had served the state well; for example, Pompey had held the title. Augustus also styled himself as 'Imperator Caesar divi filius', 'Commander Caesar son of the deified one'.
With this title he not only boasted his familial link to deified Julius Caesar, but the use of 'Imperator' signified a permanent link to the Roman tradition of victory. The word 'Caesar' was merely a cognomen for one branch of the Julian family, yet Augustus transformed 'Caesar' into a new family line that began with him.
Augustus was granted the right to hang the 'corona civica', the 'civic crown' made from oak, above his door and have laurels drape his doorposts. This crown was usually held above the head of a Roman general during a triumph, with the individual holding the crown charged to continually repeat 'memento mori', or, 'Remember, you are mortal', to the triumphant general. Additionally, laurel wreaths were important in several state ceremonies, and crowns of laurel were rewarded to champions of athletic, racing, and dramatic contests. Thus, both the laurel and the oak were integral symbols of Roman religion and statecraft; placing them on Augustus' doorposts was tantamount to declaring his home the capital. However, Augustus renounced flaunting insignia of power such as holding a scepter, wearing a diadem, or wearing the golden crown and purple toga of his predecessor Julius Caesar. If he refused to symbolize his power by donning and bearing these items on his person, the Senate nonetheless awarded him with a golden shield displayed in the meeting hall of the Curia, bearing the inscription 'virtus', 'pietas', 'clementia', 'iustitia'—'valor, piety, clemency, and justice.'
Second settlement.
By 23 BC, some of the implications of the settlement of 27 BC were becoming apparent. Augustus' holding of an annual consulate made his dominance over the Roman political system too obvious, whilst at the same time halving the opportunities for others to attain what was still nominally the supreme office of the Roman state. Further, his desire to have his nephew Marcus Claudius Marcellus follow in his footsteps and eventually assume the Principate in his turn was causing political problems and alienating his three biggest supporters – Agrippa, Maecenas and Livia. Feeling pressure from his own core group of adherents, Augustus turned to the Senate in an attempt to bolster his support there, especially among the Republicans; after his choice for co-consul in 23 BC, Aulus Terentius Varro Murena, died before taking office, he appointed the noted Republican Calpurnius Piso, who had fought against Julius Caesar and supported Cassius and Brutus.
In the late spring Augustus suffered a severe illness, and on his supposed deathbed made arrangements that would ensure the continuation of the Principate in some form, whilst at the same time allaying the senators' suspicions of his anti-republicanism. Augustus prepared to hand down his signet ring to his favored general Agrippa. However, he handed over to his co-consul Piso all of his official documents, an account of public finances, and authority over listed troops in the provinces, while his supposedly favored nephew Marcellus came away empty-handed. This came as a surprise to many who believed Augustus would have named an heir to his position as unofficial emperor.
Augustus bestowed only properties and possessions to his designated heirs, as an obvious system of institutionalized imperial inheritance would have provoked resistance and hostility amongst the republican-minded Romans fearful of monarchy. With regards to the Principate, it was obvious to Augustus that Marcellus was not ready to take on his position; nonetheless, by giving his signet ring to Agrippa, it was Augustus' intent to signal to the legions that Agrippa was to be his successor, and that no matter what the constitutional rules were, they would continue to obey Agrippa.
Soon after his bout of illness subsided, Augustus gave up his permanent consulship. The only other times Augustus would serve as consul would be in the years 5 and 2 BC, both times to introduce his grandsons into public life. Although he had resigned as consul, Augustus retained his consular 'imperium', leading to a second compromise between him and the Senate known as the Second Settlement. This was a clever ploy by Augustus; by stepping down as one of two consuls, this allowed aspiring senators a better chance to fill that position, while at the same time Augustus could 'exercise wider patronage within the senatorial class.'
Augustus was no longer in an official position to rule the state, yet his dominant position over the Roman provinces remained unchanged as he became a proconsul. When he was a consul he had the power to intervene, when he deemed it necessary, in the affairs of provincial proconsuls appointed by the Senate throughout the empire. As a proconsul he would ordinarily have lost this power; because he wanted to keep it, the Senate granted Augustus 'imperium proconsulare maius', proconsular imperium that was 'greater' (maius) than that held by the other proconsuls. This in effect gave him power over all the proconsuls in the empire. The existence of 'imperium proconsulare maius' is debated by scholars; it is also argued that he was granted only 'imperium proconsulare aequum', power equal to that of the governors, but that his supreme influence allowed him to control the affairs of the provinces.
Augustus was also granted the power of a tribune ('tribunicia potestas') for life, though not the official title of tribune. The office itself was legally closed to patricians, a status Augustus had acquired years earlier when adopted by Julius Caesar. This power allowed him to convene the Senate and people at will and lay business before them, to veto the actions of either the Assembly or the Senate, to preside over elections, and to speak first at any meeting. Also included in Augustus' tribunician authority were powers usually reserved for the Roman censor; these included the right to supervise public morals and scrutinize laws to ensure they were in the public interest, as well as the ability to hold a census and determine the membership of the Senate.
With the powers of a censor, Augustus appealed to virtues of Roman patriotism by banning all attire other than the classic toga when entering the Forum. There was no precedent within the Roman system for combining the powers of the tribune and the censor into a single position, nor was Augustus ever elected to the office of censor. Julius Caesar had been granted similar powers, wherein he was charged with supervising the morals of the state; however, this position did not extend to the censor's ability to hold a census and determine the Senate's roster. The office of the 'tribunus plebis' began to lose its prestige due to Augustus' amassing of tribunician powers, so he revived its importance by making it a mandatory appointment for any plebeian desiring the praetorship.
In addition to tribunician authority, Augustus was granted sole 'imperium' within the city of Rome itself. Traditionally, proconsuls (Roman province governors) lost their proconsular 'imperium' when they crossed the Pomerium, the sacred boundary of Rome, and entered the city. In these situations, Augustus would have power as part of his tribunician authority, but his constitutional imperium within the Pomerium would be less than that of a serving consul. That meant that when he was in the city he might not be the constitutional magistrate with the most authority. Thanks to his prestige, or 'auctoritas', his wishes would usually be obeyed, but there might be some awkwardness. To fill this power gap, the Senate voted that Augustus' 'imperium proconsulare maius' should not lapse when he was inside the city walls; thus, all armed forces in the city, formerly under the control of the prefects and consuls, were now under the sole authority of Augustus. With 'imperium proconsulare maius', Augustus was the only individual able to receive a triumph, as he was legally the head of every Roman army. In 19 BC, Lucius Cornelius Balbus, governor of Africa and conqueror of the Garamantes, was the first man of provincial origin to receive this award, as well as the last.
For every subsequent Roman victory the credit was given to Augustus, because Rome's armies were commanded by the 'legati', who were deputies of the princeps in the provinces. Tiberius, Augustus' elder stepson by his marriage to Livia, was the only exception to this rule when he received a triumph for victories in Germania in 7 BC. To ensure that his status of 'imperium proconsulare maius' was renewed in 13 BC, Augustus stayed in Rome during the renewal process and provided veterans with lavish donations to gain their support.
Many of the political subtleties of the Second Settlement seem to have evaded the comprehension of the Plebeian class. When Augustus failed to stand for election as consul in 22 BC, fears arose once again that Augustus was being forced from power by the aristocratic Senate. In 22, 21, and 19 BC, the people rioted in response, and only allowed a single consul to be elected for each of those years, ostensibly to leave the other position open for Augustus. In 22 BC there was a food shortage in Rome which sparked panic, while many urban plebs called for Augustus to take on dictatorial powers to personally oversee the crisis.
After a theatrical display of refusal before the Senate, Augustus finally accepted authority over Rome's grain supply 'by virtue of his proconsular imperium', and ended the crisis almost immediately. It was not until AD 8 that a food crisis of this sort prompted Augustus to establish a 'praefectus annonae', a permanent prefect who was in charge of procuring food supplies for Rome.
Nevertheless, there were some who were concerned by the expansion of the powers granted to Augustus by the Second Settlement, and this came to a head with the apparent conspiracy of Fannius Caepio and Lucius Licinius Varro Murena. In early 22 BC, Marcus Primus, the former proconsul (governor) of Macedonia, was charged with waging war on the Odrysian kingdom of Thrace, whose king was a Roman ally, without prior approval of the Senate. He was defended by Murena, who told the trial that his client had received specific instructions from Augustus ordering him to attack the client state. Later, Primus testified that the orders came from the recently deceased Marcellus.
Under the Constitutional settlement of 27 BC, i.e., before Augustus was granted imperium proconsulare maius, such orders, had they been given, would have been considered a breach of the Senate's prerogative, as Macedonia was under the Senate's jurisdiction, not that of the Princeps. Such an action would have ripped away the veneer of Republican restoration as promoted by Augustus, and exposed his fraud of merely being the first citizen, a first among equals. Even worse, the involvement of Marcellus provided some measure of proof that Augustus's policy was to have the youth take his place as Princeps, instituting a form of monarchy – accusations that had already played out during the crisis of 23 BC.
The situation was so serious that Augustus himself appeared at the trial, even though he had not been called as a witness. Under oath, Augustus declared that he gave no such order. Murena, disbelieving Augustus's testimony and resentful of his attempt to subvert the trial by using his 'auctoritas', rudely demanded to know why Augustus had turned up to a trial to which he had not been called; Augustus replied that he came in the public interest. Although Primus was found guilty, some jurors voted to acquit, meaning that not everybody believed Augustus's testimony.
Then, sometime prior to 1 September 22 BC a certain Castricius provided Augustus with information about a conspiracy led by Fannius Caepio against the Princeps. Murena was named among the conspirators. Tried in absentia, with Tiberius acting as prosecutor, the jury found the conspirators guilty, but it was not a unanimous verdict. Sentenced to death for treason, all the accused were executed as soon as they were captured without ever giving testimony in their defence. Augustus ensured that the facade of Republican government continued with an effective cover-up of the events.
In 19 BC, the Senate granted Augustus a form of 'general consular imperium'. Like his tribunician authority, this grant of consular powers was another instance of his exercising the powers of an office he did not actually hold. In addition, Augustus was allowed to wear the consul's insignia in public and before the Senate, as well as sit in the symbolic chair between the two consuls and hold the fasces, an emblem of consular authority. This seems to have assuaged the populace; regardless of whether or not Augustus was a consul, what mattered was that he appeared as one before the people. On 6 March 12 BC, after the death of Lepidus, he additionally took up the position of pontifex maximus, the high priest of the collegium of the Pontifices, the most important position in Roman religion. On 5 February 2 BC, Augustus was also given the title 'pater patriae', or 'father of the country'.
Augustus' powers were now complete. Later Roman Emperors would generally be limited to the powers and titles originally granted to Augustus, though often, to display humility, newly appointed Emperors would decline one or more of the honorifics given to Augustus. Just as often, as their reign progressed, Emperors would appropriate all of the titles, regardless of whether they had been granted them by the Senate. The civic crown, which later Emperors took to wearing, consular insignia, and later the purple robes of a Triumphant general ('toga picta') became the imperial insignia well into the Byzantine era.
War and expansion.
'Imperator Caesar Divi Filius Augustus' chose 'Imperator', 'victorious commander', to be his first name, since he wanted to make the notion of victory associated with him emphatically clear. By AD 13, Augustus could boast 21 occasions on which his troops had proclaimed 'imperator' as his title after a successful battle. Almost the entire fourth chapter in his publicly released memoir of achievements, the 'Res Gestae', was devoted to his military victories and honors.
Augustus also promoted the ideal of a superior Roman civilization with a task of ruling the world (the extent to which the Romans knew it), a sentiment embodied in words that the contemporary poet Virgil attributes to a legendary ancestor of Augustus: 'tu regere imperio populos, Romane, memento'—'Roman, remember by your strength to rule the Earth's peoples!' The impulse for expansionism, apparently prominent among all classes at Rome, is accorded divine sanction by Virgil's Jupiter, who in Book 1 of the 'Aeneid' promises Rome 'imperium sine fine', 'sovereignty without end'.
By the end of his reign, the armies of Augustus had conquered northern Hispania (modern Spain and Portugal), the Alpine regions of Raetia and Noricum (modern Switzerland, Bavaria, Austria, Slovenia), Illyricum and Pannonia (modern Albania, Croatia, Hungary, Serbia, etc.), and extended the borders of the Africa Province to the east and south.
After the reign of the client king Herod the Great (73–4 BC), Judea was added to the province of Syria when Augustus deposed his successor Herod Archelaus. Like Egypt, which had been conquered after the defeat of Antony in 30 BC, Judea was governed not by a proconsul or legate of Augustus, but by a high prefect of the equestrian class.
Again, no military effort was needed in 25 BC when Galatia (modern Turkey) was converted into a Roman province shortly after Amyntas of Galatia was killed by an avenging widow of a slain prince from Homonada. When the rebellious tribes of Cantabria in modern-day Spain were finally quelled in 19 BC, the territory fell under the provinces of Hispania and Lusitania. This region proved to be a major asset in funding Augustus' future military campaigns, as it was rich in mineral deposits that could be exploited in Roman mining projects, most notably the very rich gold deposits at Las Medulas.
Conquering the peoples of the Alps in 16 BC was another important victory for Rome, since it provided a large territorial buffer between the Roman citizens of Italy and Rome's enemies in Germania to the north. The poet Horace dedicated an ode to the victory, while the monument Trophy of Augustus near Monaco was built to honor the occasion. The capture of the Alpine region also served the next offensive in 12 BC, when Tiberius began the offensive against the Pannonian tribes of Illyricum while his brother Nero Claudius Drusus advanced against the Germanic tribes of the eastern Rhineland. Both campaigns were successful, and Drusus' forces reached the Elbe River by 9 BC, yet he died shortly afterwards of injuries sustained in a fall from his horse. It was recorded that the pious Tiberius walked in front of his brother's body all the way back to Rome.
To protect Rome's eastern territories from the Parthian Empire, Augustus relied on the client states of the east to act as territorial buffers and areas which could raise their own troops for defense. To ensure security of the Empire's eastern flank, Augustus stationed a Roman army in Syria, while his skilled stepson Tiberius negotiated with the Parthians as Rome's diplomat to the East. Tiberius was responsible for restoring Tigranes V to the throne of the Kingdom of Armenia.
Yet arguably his greatest diplomatic achievement was negotiating with Phraates IV of Parthia (37–2 BC) in 20 BC for the return of the battle standards lost by Crassus in the Battle of Carrhae, a symbolic victory and great boost of morale for Rome. Werner Eck claims that this was a great disappointment for Romans seeking to avenge Crassus' defeat by military means. However, Maria Brosius explains that Augustus used the return of the standards as propaganda symbolizing the submission of Parthia to Rome. The event was celebrated in art such as the breastplate design on the statue Augustus of Prima Porta and in monuments such as the Temple of Mars Ultor ('Mars the Avenger') built to house the standards.
Although Parthia always posed a threat to Rome in the east, the real battlefront was along the Rhine and Danube rivers. Before the final fight with Antony, Octavian's campaigns against the tribes in Dalmatia were the first step in expanding Roman dominions to the Danube. Victory in battle was not always a permanent success, as newly conquered territories were constantly retaken by Rome's enemies in Germania.
A prime example of Roman loss in battle was the Battle of Teutoburg Forest in AD 9, where three entire legions led by Publius Quinctilius Varus were destroyed with few survivors by Arminius, leader of the Cherusci, an apparent Roman ally. Augustus retaliated by dispatching Tiberius and Drusus to the Rhineland to pacify it, with some success, although the battle of AD 9 brought an end to Roman expansion into Germany. The Roman general Germanicus later took advantage of a Cherusci civil war between Arminius and Segestes, defeating Arminius, who fled the battle but was killed through treachery in AD 21.
Death and succession.
The illness of Augustus in 23 BC brought the problem of succession to the forefront of political issues and public attention. To ensure stability, he needed to designate an heir to his unique position in Roman society and government. This was to be achieved in small, undramatic, and incremental ways that did not stir senatorial fears of monarchy. Whoever was to succeed to his unofficial position of power would have to earn it through publicly proven merit.
Some Augustan historians argue that indications pointed toward his sister's son Marcellus, who had been quickly married to Augustus' daughter Julia the Elder. Other historians dispute this due to Augustus' will read aloud to the Senate while he was seriously ill in 23 BC, instead indicating a preference for Marcus Agrippa, who was Augustus' second in charge and arguably the only one of his associates who could have controlled the legions and held the Empire together.
After the death of Marcellus in 23 BC, Augustus married his daughter to Agrippa. This union produced five children, three sons and two daughters: Gaius Caesar, Lucius Caesar, Vipsania Julia, Agrippina the Elder, and Postumus Agrippa, so named because he was born after Marcus Agrippa died. Shortly after the Second Settlement, Agrippa was granted a five-year term of administering the eastern half of the Empire with the 'imperium' of a proconsul and the same 'tribunicia potestas' granted to Augustus (although not trumping Augustus' authority), his seat of governance stationed at Samos in the eastern Aegean. Although this granting of power would have shown Augustus' favor for Agrippa, it was also a measure to please members of his Caesarian party by allowing one of their members to share a considerable amount of power with him.
Augustus' intent to make Gaius and Lucius Caesar his heirs was apparent when he adopted them as his own children. He took the consulship in 5 and 2 BC so he could personally usher them into their political careers, and they were nominated for the consulships of AD 1 and 4. Augustus also showed favor to his stepsons, Livia's children from her first marriage, Nero Claudius Drusus Germanicus (henceforth referred to as Drusus) and Tiberius Claudius (henceforth Tiberius) granting them military commands and public office, though seeming to favor Drusus. After Agrippa died in 12 BC, Tiberius was ordered to divorce his own wife Vipsania and marry Agrippa's widow, Augustus' daughter Julia — as soon as a period of mourning for Agrippa had ended. While Drusus' marriage to Antonia was considered an unbreakable affair, Vipsania was 'only' the daughter of the late Agrippa from his first marriage.
Tiberius shared in Augustus' tribune powers as of 6 BC, but shortly thereafter went into retirement, reportedly wanting no further role in politics while he exiled himself to Rhodes. Although no specific reason is known for his departure, it could have been a combination of reasons, including a failing marriage with Julia, as well as a sense of envy and exclusion over Augustus' apparent favouring of his young grandchildren-turned-sons, Gaius and Lucius, who joined the college of priests at an early age, were presented to spectators in a more favorable light, and were introduced to the army in Gaul.
After the early deaths of both Lucius and Gaius in AD 2 and 4 respectively, and the earlier death of his brother Drusus (9 BC), Tiberius was recalled to Rome in June AD 4, where he was adopted by Augustus on the condition that he, in turn, adopt his nephew Germanicus. This continued the tradition of presenting at least two generations of heirs. In that year, Tiberius was also granted the powers of a tribune and proconsul, emissaries from foreign kings had to pay their respects to him, and by AD 13 he had been awarded his second triumph and a level of 'imperium' equal to that of Augustus.
The only other possible claimant as heir was Postumus Agrippa, who had been exiled by Augustus in AD 7, his banishment made permanent by senatorial decree, and Augustus officially disowned him. He certainly fell out of Augustus' favor as an heir; the historian Erich S. Gruen notes various contemporary sources that state Postumus Agrippa was a 'vulgar young man, brutal and brutish, and of depraved character.' Postumus Agrippa was murdered at his place of exile either shortly before or after the death of Augustus.
On 19 August AD 14, Augustus died while visiting the place of his birth father's death at Nola. Both Tacitus and Cassius Dio wrote that Livia brought about Augustus' death by poisoning fresh figs, though this allegation remains unproven. Tiberius, who was present alongside Livia at Augustus' deathbed, was named his heir. Augustus' famous last words were, 'Have I played the part well? Then applaud as I exit'—referring to the play-acting and regal authority that he had put on as emperor. Publicly, though, his last words were, 'Behold, I found Rome of clay, and leave her to you of marble.' An enormous funerary procession of mourners traveled with Augustus' body from Nola to Rome, and on the day of his burial all public and private businesses closed for the day.
Tiberius and his son Drusus delivered the eulogy while standing atop two 'rostra'. Coffin-bound, Augustus' body was cremated on a pyre close to his mausoleum. It was proclaimed that Augustus joined the company of the gods as a member of the Roman pantheon. In 410, during the Sack of Rome, the mausoleum was despoiled by the Goths and his ashes scattered.
The historian D.C.A. Shotter states that Augustus' policy of favoring the Julian family line over the Claudian might have afforded Tiberius sufficient cause to show open disdain for Augustus after the latter's death; instead, Tiberius was always quick to rebuke those who criticized Augustus. Shotter suggests that Augustus' deification, coupled with Tiberius' 'extremely conservative' attitude towards religion, obliged Tiberius to suppress any open resentment he might have harbored.
Also, the historian R. Shaw-Smith points to letters of Augustus to Tiberius which display affection towards Tiberius and high regard for his military merits. Shotter states that Tiberius focused his anger and criticism on Gaius Asinius Gallus (for marrying Vipsania after Augustus forced Tiberius to divorce her) as well as the two young Caesars Gaius and Lucius, instead of Augustus, the real architect of his divorce and imperial demotion.
Legacy.
Augustus' reign laid the foundations of a regime that lasted for nearly fifteen hundred years, through the ultimate decline of the Western Roman Empire and until the Fall of Constantinople in 1453. Both his adoptive surname, Caesar, and his title 'Augustus' became the permanent titles of the rulers of the Roman Empire for fourteen centuries after his death, in use both at Old Rome and at New Rome. In many languages, 'Caesar' became the word for 'Emperor', as in the German 'Kaiser' and in the Bulgarian and subsequently Russian 'Tsar'. The cult of 'Divus Augustus' continued until the state religion of the Empire was changed to Christianity in 391 by Theodosius I. Consequently, there are many excellent statues and busts of the first emperor. He had composed an account of his achievements, the 'Res Gestae Divi Augusti', to be inscribed in bronze in front of his mausoleum. Copies of the text were inscribed throughout the Empire upon his death. The Latin inscriptions were accompanied by Greek translations and were placed on many public edifices, such as the temple in Ankara dubbed the 'Monumentum Ancyranum', called the 'queen of inscriptions' by historian Theodor Mommsen.
A few written works by Augustus are known but have not survived. These include his poems 'Sicily', 'Epiphanus', and 'Ajax', an autobiography of 13 books, a philosophical treatise, and his written rebuttal to Brutus' 'Eulogy of Cato'. Historians are, however, able to analyze surviving letters penned by Augustus to others for additional facts or clues about his personal life.
Many consider Augustus to be Rome's greatest emperor; his policies certainly extended the Empire's life span and initiated the celebrated 'Pax Romana' or 'Pax Augusta'. The Roman Senate wished subsequent emperors to 'be more fortunate than Augustus and better than Trajan'. Augustus was intelligent, decisive, and a shrewd politician, but he was perhaps not as charismatic as Julius Caesar, and was influenced on occasion by his third wife, Livia (sometimes for the worse). Nevertheless, his legacy proved more enduring. The city of Rome was utterly transformed under Augustus, with Rome's first institutionalized police force and fire-fighting force, and the establishment of the municipal prefect as a permanent office. The police force was divided into cohorts of 500 men each, while the units of firemen ranged from 500 to 1,000 men each, with seven units assigned to the city's fourteen sectors.
A 'praefectus vigilum', or 'Prefect of the Watch', was put in charge of the vigiles, Rome's fire brigade and police. With Rome's civil wars at an end, Augustus was also able to create a standing army for the Roman Empire, fixed at a size of 28 legions of about 170,000 soldiers. This was supported by numerous auxiliary units of 500 soldiers each, often recruited from recently conquered areas.
With his finances securing the maintenance of roads throughout Italy, Augustus also installed an official courier system of relay stations overseen by a military officer known as the 'praefectus vehiculorum'. Besides bringing swifter communication among Italian polities, his extensive building of roads throughout Italy also allowed Rome's armies to march swiftly and at an unprecedented pace across the country. In AD 6 Augustus established the 'aerarium militare', donating 170 million sesterces to the new military treasury that provided for both active and retired soldiers.
One of the most enduring institutions of Augustus was the establishment of the Praetorian Guard in 27 BC, originally a personal bodyguard unit on the battlefield that evolved into an imperial guard as well as an important political force in Rome. They had the power to intimidate the Senate, install new emperors, and depose ones they disliked; the last emperor they served was Maxentius, as it was Constantine I who disbanded them in the early 4th century and destroyed their barracks, the Castra Praetoria.
Although the most powerful individual in the Roman Empire, Augustus wished to embody the spirit of Republican virtue and norms. He also wanted to relate to and connect with the concerns of the plebs and lay people. He achieved this through various acts of generosity and by cutting back on lavish excess. In 29 BC, Augustus paid 400 sesterces each to 250,000 citizens, 1,000 sesterces each to 120,000 veterans in the colonies, and spent 700 million sesterces purchasing land for his soldiers to settle upon. He also restored 82 different temples to display his care for the Roman pantheon of deities. In 28 BC, he melted down 80 silver statues erected in his likeness and in his honor, in an attempt to appear frugal and modest.
The longevity of Augustus' reign and its legacy to the Roman world should not be overlooked as a key factor in its success. As Tacitus wrote, the younger generations alive in AD 14 had never known any form of government other than the Principate. Had Augustus died earlier (in 23 BC, for instance), matters might have turned out differently. The attrition of the civil wars on the old Republican oligarchy and the longevity of Augustus, therefore, must be seen as major contributing factors in the transformation of the Roman state into a de facto monarchy in these years. Augustus' own experience, his patience, his tact, and his political acumen also played their parts. He directed the future of the Empire down many lasting paths, from the existence of a standing professional army stationed at or near the frontiers, to the dynastic principle so often employed in the imperial succession, to the embellishment of the capital at the emperor's expense. Augustus' ultimate legacy was the peace and prosperity the Empire enjoyed for the next two centuries under the system he initiated. His memory was enshrined in the political ethos of the Imperial age as a paradigm of the good emperor. Every Emperor of Rome adopted his name, Caesar Augustus, which gradually lost its character as a name and eventually became a title. The Augustan era poets Virgil and Horace praised Augustus as a defender of Rome, an upholder of moral justice, and an individual who bore the brunt of responsibility in maintaining the empire.
However, for his rule of Rome and establishing the principate, Augustus has also been subjected to criticism throughout the ages. The contemporary Roman jurist Marcus Antistius Labeo (d. AD 10/11), fond of the days of pre-Augustan republican liberty in which he had been born, openly criticized the Augustan regime. At the beginning of his 'Annals', the Roman historian Tacitus (c. 56 – c. 117) wrote that Augustus had cunningly subverted Republican Rome into a position of slavery. He went on to say that, with Augustus' death and the swearing of loyalty to Tiberius, the people of Rome simply traded one slaveholder for another. Tacitus, however, records two contradictory but common views of Augustus:
According to the second opposing opinion:
In a recent biography on Augustus, Anthony Everitt asserts that through the centuries, judgments on Augustus' reign have oscillated between these two extremes but stresses that:
Tacitus was of the belief that Nerva (r. 96–98) successfully 'mingled two formerly alien ideas, principate and liberty.' The 3rd-century historian Cassius Dio acknowledged Augustus as a benign, moderate ruler, yet like most other historians after the death of Augustus, Dio viewed Augustus as an autocrat. The poet Marcus Annaeus Lucanus (AD 39–65) was of the opinion that Caesar's victory over Pompey and the fall of Cato the Younger (95 BC–46 BC) marked the end of traditional liberty in Rome; historian Chester G. Starr, Jr. writes of his avoidance of criticizing Augustus, 'perhaps Augustus was too sacred a figure to accuse directly.'
The Anglo-Irish writer Jonathan Swift (1667–1745), in his 'Discourse on the Contests and Dissentions in Athens and Rome', criticized Augustus for installing tyranny over Rome, and likened what he believed to be Great Britain's virtuous constitutional monarchy to Rome's moral Republic of the 2nd century BC. In his criticism of Augustus, the admiral and historian Thomas Gordon (1658–1741) compared Augustus to the puritanical tyrant Oliver Cromwell (1599–1658). Thomas Gordon and the French political philosopher Montesquieu (1689–1755) both remarked that Augustus was a coward in battle. In his 'Memoirs of the Court of Augustus', the Scottish scholar Thomas Blackwell (1701–1757) deemed Augustus a Machiavellian ruler, 'a bloodthirsty vindicative usurper', 'wicked and worthless', 'a mean spirit', and a 'tyrant'.
Revenue reforms.
Augustus' public revenue reforms had a great impact on the subsequent success of the Empire. Augustus brought a far greater portion of the Empire's expanded land base under consistent, direct taxation from Rome, instead of exacting varying, intermittent, and somewhat arbitrary tributes from each local province as Augustus' predecessors had done. This reform greatly increased Rome's net revenue from its territorial acquisitions, stabilized its flow, and regularized the financial relationship between Rome and the provinces, rather than provoking fresh resentments with each new arbitrary exaction of tribute.
The measures of taxation in the reign of Augustus were determined by population census, with fixed quotas for each province. Citizens of Rome and Italy paid indirect taxes, while direct taxes were exacted from the provinces. Indirect taxes included a 4% tax on the price of slaves, a 1% tax on goods sold at auction, and a 5% tax on the inheritance of estates valued at over 100,000 sesterces by persons other than the next of kin.
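The indirect rates above can be illustrated with a small calculation. This is a hedged sketch using hypothetical amounts, not a historical model; in particular, it assumes the 5% inheritance tax applied to the full value of a qualifying estate, which the text does not specify.

```python
# Illustrative sketch of the Augustan indirect taxes described above:
# 4% on slave sales, 1% on auction goods, and 5% on inheritances of
# estates over 100,000 sesterces by persons other than next of kin.
# All amounts below are hypothetical examples, not historical records.

SLAVE_TAX = 0.04          # 4% of the price of a slave
AUCTION_TAX = 0.01        # 1% of goods sold at auction
INHERITANCE_TAX = 0.05    # 5% of qualifying estates
INHERITANCE_THRESHOLD = 100_000  # sesterces; smaller estates were exempt

def inheritance_tax(estate_value: int, next_of_kin: bool) -> float:
    """Tax due on an inherited estate, per the rules sketched above.

    Assumes (hypothetically) the 5% rate applies to the whole estate
    once it exceeds the threshold; next of kin are exempt entirely.
    """
    if next_of_kin or estate_value <= INHERITANCE_THRESHOLD:
        return 0.0
    return estate_value * INHERITANCE_TAX

# A hypothetical estate of 200,000 sesterces left to a friend:
print(inheritance_tax(200_000, next_of_kin=False))  # 10000.0
# The same estate left to a son is exempt:
print(inheritance_tax(200_000, next_of_kin=True))   # 0.0
```

The fixed percentage rates made the yield predictable in a way the earlier ad hoc tributes were not, which is the point of the reform described in the surrounding text.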
An equally important reform was the abolition of private tax farming, which was replaced by salaried civil-service tax collectors. Private contractors who collected taxes had been the norm in the Republican era, and some had grown powerful enough to influence the number of votes for politicians in Rome. The tax farmers had gained great infamy for their depredations, as well as great private wealth, by winning the right to tax local areas.
Rome's revenue was the amount of the successful bids, and the tax farmers' profits consisted of any additional amounts they could forcibly wring from the populace with Rome's blessing. Lack of effective supervision, combined with tax farmers' desire to maximize their profits, had produced a system of arbitrary exactions that was often barbarously cruel to taxpayers, widely (and accurately) perceived as unfair, and very harmful to investment and the economy.
The use of Egypt's immense land rents to finance the Empire's operations resulted from Augustus' conquest of Egypt and the shift to a Roman form of government. As it was effectively considered Augustus' private property rather than a province of the Empire, it became part of each succeeding emperor's patrimonium. Instead of a legate or proconsul, Augustus installed a prefect from the equestrian class to administer Egypt and maintain its lucrative seaports; this position became the highest political achievement for any equestrian besides becoming Prefect of the Praetorian Guard. The highly productive agricultural land of Egypt yielded enormous revenues that were available to Augustus and his successors to pay for public works and military expeditions, as well as bread and circuses for the population of Rome.
Month of August.
The month of August (Latin: 'Augustus') is named after Augustus; until his time it was called Sextilis (named so because it had been the sixth month of the original Roman calendar and the Latin word for six is 'sex'). Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length (see Julian calendar). According to a 'senatus consultum' quoted by Macrobius, Sextilis was renamed to honor Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, fell in that month.
Building projects.
On his deathbed, Augustus boasted 'I found a Rome of bricks; I leave to you one of marble'. Although there is some truth in the literal meaning of this, Cassius Dio asserts that it was a metaphor for the Empire's strength. Marble could be found in buildings of Rome before Augustus, but it was not extensively used as a building material until the reign of Augustus.
Although this did not apply to the Subura slums, which were still as rickety and fire-prone as ever, he did leave a mark on the monumental topography of the centre and of the Campus Martius, with the Ara Pacis (Altar of Peace) and monumental sundial, whose central gnomon was an obelisk taken from Egypt. The relief sculptures decorating the Ara Pacis visually augmented the written record of Augustus' triumphs in the 'Res Gestae'. Its reliefs depicted the imperial pageants of the praetorians, the Vestals, and the citizenry of Rome.
He also built the Temple of Caesar, the Baths of Agrippa, and the Forum of Augustus with its Temple of Mars Ultor. Other projects were either encouraged by him, such as the Theatre of Balbus, and Agrippa's construction of the Pantheon, or funded by him in the name of others, often relations (e.g. Portico of Octavia, Theatre of Marcellus). Even his Mausoleum of Augustus was built before his death to house members of his family.
To celebrate his victory at the Battle of Actium, the Arch of Augustus was built in 29 BC near the entrance of the Temple of Castor and Pollux, and widened in 19 BC to include a triple-arch design. There are also many buildings outside of the city of Rome that bear Augustus' name and legacy, such as the Theatre of Mérida in modern Spain, the Maison Carrée built at Nîmes in today's southern France, as well as the Trophy of Augustus at La Turbie, located near Monaco.
After the death of Agrippa in 12 BC, a solution had to be found for maintaining Rome's water supply system, which Agrippa had overseen when he served as aedile and had even funded afterwards at his own expense as a private citizen. In that year, Augustus arranged a system whereby the Senate designated three of its members as prime commissioners in charge of the water supply, to ensure that Rome's aqueducts did not fall into disrepair.
In the late Augustan era, the commission of five senators called the 'curatores locorum publicorum iudicandorum' (translated as 'Supervisors of Public Property') was put in charge of maintaining public buildings and temples of the state cult. Augustus created the senatorial group of the 'curatores viarum' (translated as 'Supervisors for Roads') for the upkeep of roads; this senatorial commission worked with local officials and contractors to organize regular repairs.
The Corinthian order of architectural style originating from ancient Greece was the dominant architectural style in the age of Augustus and the imperial phase of Rome. Suetonius once commented that Rome was unworthy of its status as an imperial capital, yet Augustus and Agrippa set out to dismantle this sentiment by transforming the appearance of Rome upon the classical Greek model.
Physical appearance and official images.
His biographer Suetonius, writing about a century after Augustus' death, described his appearance as: '.. unusually handsome and exceedingly graceful at all periods of his life, though he cared nothing for personal adornment. He was so far from being particular about the dressing of his hair, that he would have several barbers working in a hurry at the same time, and as for his beard he now had it clipped and now shaved, while at the very same time he would either be reading or writing something .. He had clear, bright eyes .. His teeth were wide apart, small, and ill-kept; his hair was slightly curly and inclining to golden; his eyebrows met. His ears were of moderate size, and his nose projected a little at the top and then bent ever so slightly inward. His complexion was between dark and fair. He was short of stature (although Julius Marathus, his freedman and keeper of his records, says that he was five feet and nine inches in height), but this was concealed by the fine proportion and symmetry of his figure, and was noticeable only by comparison with some taller person standing beside him. .. '
His official images were very tightly controlled and idealized, drawing from a tradition of Hellenistic royal portraiture rather than the tradition of realism in Roman portraiture. He first appeared on coins at the age of 19, and from about 29 BC 'the explosion in the number of Augustan portraits attests a concerted propaganda campaign aimed at dominating all aspects of civil, religious, economic and military life with Augustus' person'. The early images did indeed depict a young man, but although there were gradual changes his images remained youthful until he died in his seventies, by which time they had 'a distanced air of ageless majesty'. Among the best known of many surviving portraits are the Augustus of Prima Porta, the image on the Ara Pacis, and the Via Labicana Augustus, which shows him as a priest. Several cameo portraits include the Blacas Cameo and 'Gemma Augustea'.
Ancestry.
Ancestors of Augustus
Descendants.
Augustus' only biological (non-adopted) child was his daughter.
External links.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1274'>
Geography of Antarctica
The geography of Antarctica is dominated by its south polar location and, thus, by ice. The Antarctic continent, located in the Earth's southern hemisphere, is centered asymmetrically around the South Pole and largely south of the Antarctic Circle. It is surrounded by the southern waters of the World Ocean – alternatively (depending on source), it is washed by the Southern (or Antarctic) Ocean or the southern Pacific, Atlantic, and Indian Oceans. It has an area of more than 14 million km².
Some 98% of Antarctica is covered by the Antarctic ice sheet, the world's largest ice sheet and also its largest reservoir of fresh water. Averaging at least 1.6 km thick, the ice is so massive that it has depressed the continental bedrock in some areas more than 2.5 km below sea level; subglacial lakes of liquid water also occur (e.g., Lake Vostok). Ice shelves and rises populate the ice sheet on the periphery.
Regions.
Physically, Antarctica is divided in two by the Transantarctic Mountains, close to the neck between the Ross Sea and the Weddell Sea. Western Antarctica and Eastern Antarctica correspond roughly to the western and eastern hemispheres relative to the Greenwich meridian. This usage has been regarded as Eurocentric by some, and the alternative terms Lesser Antarctica and Greater Antarctica (respectively) are sometimes preferred.
Western Antarctica is covered by the West Antarctic Ice Sheet. There has been some concern about this ice sheet, because there is a small chance that it will collapse. If it did, ocean levels would rise by a few metres in a very short period of time.
Volcanoes.
There are four volcanoes on the mainland of Antarctica that are considered to be active on the basis of observed fumarolic activity or 'recent' tephra deposits:
Mount Melbourne (2,730 m) (74°21'S., 164°42'E.), a stratovolcano;
Mount Berlin (3,500 m) (76°03'S., 135°52'W.), a stratovolcano;
Mount Kauffman (2,365 m) (75°37'S., 132°25'W.), a stratovolcano; and
Mount Hampton (3,325 m) (76°29'S., 125°48'W.), a volcanic caldera.
Several volcanoes on offshore islands have records of historic activity. Mount Erebus (3,795 m), a stratovolcano on Ross Island, has 10 known eruptions and 1 suspected eruption. On the opposite side of the continent, Deception Island (62°57'S., 60°38'W.), a volcanic caldera with 10 known and 4 suspected eruptions, has been the most active. Buckle Island in the Balleny Islands (66°50'S., 163°12'E.), Penguin Island (62°06'S., 57°54'W.), Paulet Island (63°35'S., 55°47'W.), and Lindenberg Island (64°55'S., 59°40'W.) are also considered to be active.
West Antarctica.
West Antarctica is the smaller part of the continent, divided into:
Ice shelves.
The larger ice shelves are:
For all ice shelves, see List of Antarctic ice shelves.
Islands.
For a list of all Antarctic islands see List of Antarctic and sub-Antarctic islands.
East Antarctica.
East Antarctica is the larger part of the continent; both the South Magnetic Pole and the geographic South Pole are situated here. It is divided into:
Ice shelves.
The larger ice shelves are:
For all ice shelves, see List of Antarctic ice shelves.
Islands.
For a list of all Antarctic islands see List of Antarctic and sub-Antarctic islands.
Territorial land claims.
Seven nations have made official territorial claims in Antarctica.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1279'>
Transport in Antarctica
Transport in Antarctica has transformed from explorers crossing the isolated, remote continent on foot to a more accessible region, as human technologies have enabled more convenient and faster transport, predominantly by air and water, as well as by land.
Transportation technologies in a remote area like Antarctica need to be able to cope with extremely low temperatures and continuous winds to ensure the travelers' safety. Due to the fragility of the Antarctic environment, only a limited number of transport movements can take place, and sustainable transportation technologies have to be used to reduce the ecological footprint.
The infrastructure of land, water and air transport needs to be safe and sustainable.
Currently thousands of tourists and hundreds of scientists a year rely on the Antarctic transportation system.
Land transport.
Mawson Station started using classic Volkswagen Beetles, the first production cars to be used in Antarctica. The first of these was named 'Antarctica 1'. However, the scarcity and poor quality of road infrastructure limits land transportation by conventional vehicles. Winds continuously blow snow on the roads. The McMurdo – South Pole Highway is a 900-mile (1450 km) road in Antarctica linking the United States McMurdo Station on the coast to the Amundsen–Scott South Pole Station.
In 2005 a team of six people took part in the Ice Challenger Expedition. Travelling in a specially designed six-wheel-drive vehicle, the team completed the journey from the Antarctic coast at Patriot Hills to the geographic South Pole in 69 hours, easily beating the previous record of 24 days. They arrived at the South Pole on 12 December 2005.
The team members on that expedition were Andrew Regan, Jason De Carteret, Andrew Moon, Richard Griffiths, Gunnar Egilsson and Andrew Miles. The expedition successfully showed that wheeled transport on the continent is not only possible but also often more practical. The expedition also hoped to raise awareness about global warming and climate change.
A second expedition led by Andrew Regan and Andrew Moon departed in November 2010. The Moon-Regan Trans Antarctic Expedition this time traversed the entire continent twice, using 2 six wheel drive vehicles and a Concept Ice Vehicle designed by Lotus. This time the team used the expedition to raise awareness about the global environmental importance of the Antarctic region and to show that biofuel can be a viable and environmentally friendly option.
Water transport.
Antarctica's only harbour is at McMurdo Station. Most coastal stations have offshore anchorages, and supplies are transferred from ship to shore by small boats, barges, and helicopters. A few stations have a basic wharf facility. All ships at port are subject to inspection in accordance with Article 7, Antarctic Treaty. Offshore anchorage is sparse and intermittent, but poses no problem to sailboats designed for the ice, typically with lifting keels and long shorelines.
McMurdo Station and Palmer Station are for government use only, except by permit (see Permit Office under 'Legal System'). A number of tour boats, ranging from large motorized vessels to small sailing yachts, visit the Antarctic Peninsula during the summer months (January–March). Most are based in Ushuaia, Argentina.
Air transport.
Transport in Antarctica also takes place by air, using aeroplanes and helicopters. Aeroplane runways and helicopter pads have to be kept snow-free to ensure safe take-off and landing conditions.
Antarctica has 20 airports, but there are no developed public-access airports or landing facilities. Thirty stations, operated by 16 national governments party to the Antarctic Treaty, have landing facilities for helicopters, fixed-wing aircraft, or both; commercial enterprises operate two additional air facilities.
Helicopter pads are available at 27 stations. Runways at 15 locations are gravel, sea-ice, blue-ice, or compacted snow, suitable for landing wheeled fixed-wing aircraft; of these, 1 is greater than 3 km in length, 6 are between 2 km and 3 km, 3 are between 1 km and 2 km, 3 are less than 1 km, and 2 are of unknown length. Snow-surface skiways, limited to use by ski-equipped fixed-wing aircraft, are available at another 15 locations; of these, 4 are greater than 3 km in length, 3 are between 2 km and 3 km, 2 are between 1 km and 2 km, 2 are less than 1 km, and data is unavailable for the remaining 4.
Antarctic airports are subject to severe restrictions and limitations resulting from extreme seasonal and geographic conditions; they do not meet ICAO standards, and advance approval from the respective governmental or nongovernmental operating organization is required for landing (1999 est.). Flights to the continent in the permanent darkness of the winter are normally only undertaken in an emergency, with burning barrels of fuel to outline a runway. On September 11, 2008, a United States Air Force C-17 Globemaster III successfully completed the first landing in Antarctica using night-vision goggles at Pegasus Field.
In April 2001 an emergency evacuation of Dr. Ronald Shemenski was needed from Amundsen–Scott South Pole Station when he contracted pancreatitis. Three C-130 Hercules were called back before their final leg because of weather. Organizers then called on Kenn Borek Air, based in Calgary. Two de Havilland Twin Otters were dispatched out of Calgary, with one serving as back-up. Twin Otters are specifically designed for the Canadian north, and Kenn Borek Air's motto is 'Anywhere, Anytime, World-Wide.' The mission was a success, but not without difficulties and drawbacks. Ground crews needed to create a 2 km runway with tracked equipment not designed to operate in the low temperatures at that time of year, the aircraft controls had to be jury-rigged when the flaps froze in position after landing, and instruments were not reliable because of the cold. When the crew saw a 'faint pink line on the horizon' they knew they were going in the right direction. This was the first rescue from the South Pole during polar winter. Canada honoured the Otter crew for bravery.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1285'>
Geography of Alabama
This article covers the geography of the U.S. state of Alabama. Alabama is the 30th-largest state and borders four U.S. states: Mississippi, Tennessee, Georgia, and Florida. It also borders the Gulf of Mexico.
Physical features.
Extending entirely across the state along its northern boundary, and in the middle stretching farther south, is the Cumberland Plateau, or Tennessee Valley region, broken into broad tablelands by the dissection of rivers. In the northern part of this plateau, west of Jackson county, is an area of level highlands. South of these highlands, occupying a narrow strip on each side of the Tennessee River, is a country of gently rolling lowlands. To the northeast of these highlands and lowlands is a rugged section with steep mountain-sides, deep narrow coves and valleys, and flat mountain-tops. In the remainder of this region, the southern portion, the most prominent feature is 'Little Mountain', extending east to west between two valleys and rising precipitously above them.
Adjoining the Cumberland Plateau region on the southeast is the Appalachian Valley (locally known as Coosa Valley) region, which is the southern extremity of the Appalachian Mountains and occupies a considerable area within the state. This is a limestone belt with parallel hard-rock ridges left standing by erosion to form mountains. Although the general direction of the mountains, ridges, and valleys is northeast and southwest, irregularity is one of the region's most prominent characteristics. In the northeast are several flat-topped mountains, of which Raccoon and Lookout are the most prominent; they reach their maximum elevation near the Georgia line and gradually decrease in height toward the southwest, where Sand Mountain is a continuation of Raccoon. South of these the mountains are marked by steep northwest sides, sharp crests, and gently sloping southeast sides.
Southeast of the Appalachian Valley region, the Piedmont Plateau also crosses the Alabama border from the northeast and occupies a small triangular-shaped section of which Randolph and Clay counties, together with the northern parts of Tallapoosa and Chambers, form the principal portion. Its surface is gently undulating. The Piedmont Plateau is a lowland worn down by erosion on hard crystalline rocks, then uplifted to form a plateau.
The remainder of the state is occupied by the 'Coastal Plain'. This is crossed by foot-hills and rolling prairies in the central part of the state, becomes lower and more level toward the southwest, and in the extreme south is flat and but slightly elevated above the sea.
The Cumberland Plateau region is drained to the west-northwest by the Tennessee River and its tributaries; all other parts of the state are drained to the southwest. In the Appalachian Valley region the Coosa River is the principal river; and in the Piedmont Plateau, the Tallapoosa River. In the Coastal Plain are the Tombigbee River in the west, the Alabama River (formed by the Coosa and Tallapoosa) in the western central, and in the east the Chattahoochee River, which forms almost half of the Georgia boundary. The Tombigbee and Alabama rivers unite near the southwest corner of the state, their waters discharging into Mobile Bay by the Mobile and Tensas rivers. The Black Warrior River is a considerable stream which joins the Tombigbee from the east.
The valleys in the north and northeast are usually deep and narrow, but in the Coastal Plain they are broad and in most cases rise in three successive terraces above the stream. The harbour of Mobile was formed by the drowning of the lower part of the valley of the Alabama and Tombigbee rivers as a result of the sinking of the land here, such sinking having occurred on other parts of the Gulf coast.
Flora and fauna.
The fauna and flora of Alabama are similar to those of the Gulf states in general and have no distinctive characteristics. However, the Mobile River system has a high incidence of endemism among freshwater mollusks and biodiversity is high.
In Alabama, vast forests of pine constitute the largest proportion of the state's forest growth. There is also an abundance of cypress, hickory, oak, populus, and eastern redcedar trees. In other areas, hemlock grows in the north and southern white cedar in the southwest. Other native trees include ash, hackberry, and holly. In the Gulf region of the state grow various species of palmetto and palm. In Alabama there are more than 150 shrubs, including mountain laurel and rhododendron. Among cultivated plants are wisteria and camellia.
While in the past the state enjoyed a variety of mammals such as plains bison, eastern elk, North American cougar, bear, and deer, only the white-tailed deer remains abundant. Still fairly common are the bobcat, American beaver, muskrat, raccoon, Virginia opossum, rabbit, squirrel, red and gray foxes, and long-tailed weasel. Coypu and nine-banded armadillo have been introduced to the state and are now also common.
Alabama’s birds include golden and bald eagles, osprey and other hawks, yellow-shafted flickers, and black-and-white warblers. Game birds include bobwhite quail, duck, wild turkey, and goose. Freshwater fish such as bream, shad, bass, and sucker are common. Along the Gulf Coast there are seasonal runs of tarpon, pompano, red drum, and bonito.
The U.S. Fish and Wildlife Service lists as endangered 99 animals, fish, and birds, and 18 plant species. The endangered animals include the Alabama beach mouse, gray bat, Alabama red-bellied turtle, fin and humpback whales, bald eagle, and wood stork.
American black bear, racking horse, yellow-shafted flicker, wild turkey, Atlantic tarpon, largemouth bass, southern longleaf pine, eastern tiger swallowtail, monarch butterfly, Alabama red-bellied turtle, Red Hills salamander, camellia, oak-leaf hydrangea, peach, pecan, and blackberry are Alabama's state symbols.
Climate and soil.
The climate of Alabama is humid subtropical.
The heat of summer is tempered in the south by the winds from the Gulf of Mexico, and in the north by the elevation above the sea. The average annual temperature is highest in the southwest along the coast, and lowest in the northeast among the highlands. Thus at Mobile the annual mean is 67 °F (19 °C), the mean for the summer 81 °F (27 °C), and for the winter 52 °F (11 °C); and at Valley Head, in De Kalb county, the annual mean is 59 °F (15 °C), the mean for the summer 75 °F (24 °C), and for the winter 41 °F (5 °C). At Montgomery, in the central region, the average annual temperature is 66 °F (19 °C), with a winter average of 49 °F (9 °C), and a summer average of 81 °F (27 °C). The average winter minimum for the entire state is 35 °F (2 °C), and there is an average of 35 days in each year in which the thermometer falls below the freezing-point. At extremely rare intervals the thermometer has fallen below zero Fahrenheit (-18 °C), as was the case in the remarkable cold wave of 12–13 February 1899, when an absolute minimum of -17 °F (-29 °C) was registered at Valley Head. The highest temperature ever recorded was 109 °F (43 °C) in Talladega county in 1902.
The amount of precipitation is greatest along the coast (62 inches/1,574 mm) and evenly distributed through the rest of the state (about 52 inches/1,320 mm). During each winter there is usually one fall of snow in the south and two in the north; but the snow quickly disappears, and sometimes, during an entire winter, the ground is not covered with snow. Heavy snowfall can occur, such as during the New Year's Eve 1963 snowstorm and the 1993 Storm of the Century. Hailstorms occur occasionally in the spring and summer, but are seldom destructive. Heavy fogs are rare, and are confined chiefly to the coast. Thunderstorms occur throughout the year; they are most common in the summer, but most severe in the spring and fall, when destructive winds and tornadoes occasionally occur. The prevailing winds are from the south. Hurricanes are quite common in the state, especially in the southern part, and major hurricanes that occasionally strike the coast can be very destructive.
As regards its soil, Alabama may be divided into four regions. Extending from the Gulf northward for about 150 miles (240 km) is the outer belt of the Coastal Plain, also called the 'Timber Belt,' whose soil is sandy and poor, but responds well to fertilization. North of this is the inner lowland of the Coastal Plain, or the 'Black Prairie,' which takes in some seventeen counties. It receives its name from its soil (weathered from the weak underlying limestone), which is black in colour, almost destitute of sand and loam, and rich in limestone and marl formations, especially adapted to the production of cotton; hence the region is also called the 'Cotton Belt.' Between the 'Cotton Belt' and the Tennessee Valley is the mineral region, the 'Old Land' area, a region of resistant rocks, whose soils, also derived from weathering in situ, are of varied fertility, the best coming from the granites, sandstones and limestones, the poorest from the gneisses, schists and slates. North of the mineral region is the 'Cereal Belt,' embracing the Tennessee Valley and the counties beyond, whose richest soils are the red clays and dark loams of the river valley; north of which are less fertile soils, produced by siliceous and sandstone formations.
Wetumpka Meteor Crater.
Wetumpka is the home of 'Alabama's greatest natural disaster.' A meteorite hit the area about 80 million years ago. The hills just east of downtown showcase the eroded remains of the five mile (8 km) wide impact crater that was blasted into the bedrock; the area is labeled the Wetumpka crater or astrobleme ('star-wound') for the concentric rings of fractures and zones of shattered rock that can be found beneath the surface. In 2002, Christian Koeberl of the Institute of Geochemistry at the University of Vienna published evidence that established the site as an internationally recognized impact crater.
Public lands.
Alabama includes several types of public use lands. These include four national forests and one national preserve within state borders that provide over 25% of the state's public recreation land.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1286'>
List of Governors of Alabama
The Governor of Alabama is the chief executive of the U.S. state of Alabama. The governor is the head of the executive branch of Alabama's state government and is charged with enforcing state laws. The governor has the power to either approve or veto bills passed by the Alabama Legislature, to convene the legislature, and to grant pardons, except in cases of impeachment. The governor is also the commander-in-chief of the state's military forces.
There have officially been 53 governors of the state of Alabama; this official numbering skips acting and military governors. In addition, the first governor, William Wyatt Bibb, served as the only governor of Alabama Territory. Five people have served as acting governor, bringing the total number of people serving as governor to 58, spread over 63 distinct terms. Four governors have served multiple non-consecutive terms: Bibb Graves, Jim Folsom, and Fob James each served two, and George Wallace served three non-consecutive periods. Officially, these non-consecutive terms are numbered only with the number of their first term. William D. Jelks also served non-consecutive terms, but his first term was in an acting capacity. The longest-serving governor was George Wallace, who served sixteen years over four terms. The shortest term for a non-acting governor was that of Hugh McVay, who served four and a half months after replacing the resigning Clement Comer Clay. Lurleen Wallace, wife of George Wallace, was the first and so far only woman to serve as governor of Alabama, and the third woman to serve as governor of any state. The current governor is Republican Robert J. Bentley, who took office on January 17, 2011.
Governors.
Governor of the Territory of Alabama.
Alabama Territory was formed on March 3, 1817, from Mississippi Territory. It had only one governor appointed by the President of the United States before it became a state; he became the first state governor.
Governors of the State of Alabama.
Alabama was admitted to the Union on December 14, 1819. It seceded from the Union on January 11, 1861 and was a founding member of the Confederate States of America on February 4, 1861; there was no Union government in exile, so there was a single line of governors. Following the end of the American Civil War during Reconstruction, it was part of the Third Military District, which exerted some control over governor appointments and elections. Alabama was readmitted to the Union on July 14, 1868.
The first Alabama Constitution, ratified in 1819, provided that a governor be elected every two years, limited to serve no more than four out of every six years. This limit remained in place until the constitution of 1868, which simply allowed governors to serve terms of two years. The current constitution of 1901 increased terms to four years, but prohibited governors from succeeding themselves. Amendment 282 to the constitution, passed in 1968, allowed governors to succeed themselves once. The constitution had no set date for the commencement of a governor's term until 1901, when it was set at the first Monday after the second Tuesday in the January following an election.
The office of lieutenant governor was created in 1868, abolished in 1875, and recreated in 1901. According to the current constitution, should the governor be out of the state for more than 20 days, the lieutenant governor becomes acting governor, and if the office of governor becomes vacant the lieutenant governor fully becomes governor. Earlier constitutions said the powers of the governor devolved upon the successor, rather than them necessarily becoming governor, but the official listing includes these as full governors. The governor and lieutenant governor are not elected on the same ticket.
Alabama was a strongly Democratic state before the Civil War, electing only candidates from the Democratic-Republican and Democratic parties. It had two Republican governors following Reconstruction, but after the Democratic Party re-established control, 112 years passed before voters chose another Republican.
Other high offices held.
Eighteen of Alabama's governors have held higher federal or Confederate offices. All but three were elected to the U.S. Congress, although one of those represented only Georgia. The remaining three served in the Confederate government: two as members of the Provisional Confederate Congress, and one as the Confederate States Attorney General. One governor served as Minister to Russia. Two governors (marked with *) resigned to take seats in the Senate, and two resigned their positions to take office as governor.
Additionally, two governors were elected to the U.S. Senate shortly after the American Civil War, but did not take office: Lewis E. Parsons was refused his seat because Alabama had not yet been reconstructed, and John A. Winston would not take the oath of allegiance.
All representatives and senators listed represented Alabama except where noted.
Living former governors.
Six former governors were alive, the oldest being John M. Patterson (1959–1963, born 1921).
The most recent death of a former governor was that of H. Guy Hunt (1987–1993), who died on January 30, 2009.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1288'>
Apocrypha
Apocrypha are statements or claims that are of dubious authenticity. The word's origin is the Medieval Latin adjective 'apocryphus', 'secret, or non-canonical', from the Greek adjective ('apokryphos'), 'obscure', from the verb ('apokryptein'), 'to hide away'.
Introduction.
Apocrypha is commonly applied in Christian religious contexts involving certain disagreements about biblical canonicity. The pre-Christian-era Jewish translation (into Greek) of holy scriptures known as the Septuagint included the writings in dispute. However, the Jewish canon was not finalized until at least the first or second century A.D., at which time considerations of Greek language and the beginnings of Christian acceptance of the Septuagint weighed against some of the texts. Some were not accepted by the Jews as part of the Hebrew Bible canon. Over several centuries of consideration, the books of the Septuagint were finally accepted into the Christian Old Testament, by A.D. 405 in the west, and by the end of the fifth century in the east. The Christian canon thus established was retained for over 1,000 years, even after the 11th-century schism that separated the church into the branches known as the Roman Catholic and Eastern Orthodox churches.
Those canons were not challenged until the Protestant Reformation (16th century), when both the Roman Catholic and Eastern Orthodox Churches reaffirmed them. The reformers rejected the parts of the canon that were not part of the Hebrew Bible and established a revised Protestant canon. Thus, concerning the Old Testament books, what is thought of as the 'Protestant canon' is actually the final Hebrew canon. The differences can be found by looking here or by comparing the contents of the 'Protestant' and Catholic Bibles, and they represent the narrowest Christian application of the term 'Apocrypha'.
Among some Protestants, 'apocryphal' began to take on extra or altered connotations: not just 'of dubious authenticity', but 'having spurious or false content', not just 'obscure' but 'having hidden or suspect motives'. Protestants were (and are) not unanimous in adopting those meanings. The Church of England agreed, and that view continues today throughout the Lutheran Church, the worldwide Anglican Communion, and many other denominations. Whichever implied meaning is intended, 'Apocrypha' was (and is) used primarily by Protestants, in reference to the books of questioned canonicity. Catholics and Orthodox sometimes avoid using the term in contexts where it might be considered disputatious or be misconstrued as yielding on the point of canonicity. Very few Protestant published Bibles include the apocryphal books in a separate section (rather like an appendix), so as not to intermingle them with their canonical books.
Explaining the Eastern Orthodox Church's canon is made difficult because of differences of perspective with the Roman Catholic church in the interpretation of how it was done. Those differences (in matters of jurisdictional authority) were contributing factors in the separation of the Roman Catholics and Orthodox around 1054, but the formation of the canon was largely complete (fully complete in the Catholic view) by the fifth century, six centuries before the separation. In the eastern part of the church, it took much of the fifth century also to come to agreement, but in the end it was accomplished. The canonical books thus established by the undivided church became canon for what was later to become Roman Catholic and Eastern Orthodox alike. The East did already differ from the West in not considering every question of canon yet settled, and it subsequently adopted a few more books into its Old Testament. It also allowed consideration of yet a few more to continue not fully decided, which led in some cases to adoption in one or more jurisdictions, but not all. Thus, there are today a few remaining differences of canon among Orthodox, and all Orthodox accept a few more books than appear in the Catholic canon. Protestants accept none of these additional books as canon either, but see them as having roughly the same status as the earlier Apocrypha. As Protestant awareness of the Eastern Orthodox increases in nations like the United States, interest in the full Orthodox canon might also increase enough for those books to be published in the Apocrypha of some Protestant Bibles. That was not yet common as of 2013, so they are not as widely available in English.
Before the fifth century, the Christian writings that were then under discussion for inclusion in the canon but had not yet been accepted were classified in a group known as the ancient antilegomenae. These were all candidates for the New Testament and included several books which were eventually accepted, such as: The Epistle to the Hebrews, 2 Peter, 3 John and the Revelation of John (Apocalypse). None of those accepted books can be considered Apocryphal now, since all Christendom accepts them as canonical. Of the uncanonized ones, the Early Church considered some heretical but viewed others quite well. Some Christians, in an extension of the meaning, might also consider the non-heretical books to be 'apocryphal' along the manner of Martin Luther: not canon, but useful to read. This category includes books such as the Epistle of Barnabas, the Didache, and The Shepherd of Hermas which are sometimes referred to as the Apostolic Fathers.
Examples.
Esoteric writings and objects.
The word 'apocryphal' was first applied to writings which were kept secret because they were the vehicles of esoteric knowledge considered too profound or too sacred to be disclosed to anyone other than the initiated. For example, it is used in this sense to describe 'A Sacred and Secret Book of Moses, called Eighth, or Holy'. This is a text taken from a Leiden papyrus of the third or fourth century A.D. The text may be as old as the first century, but other proof of age has not been found. In a similar vein, the disciples of the Gnostic Prodicus boasted that they possessed the secret books of Zoroaster. The term in general enjoyed high consideration among the Gnostics (see Acts of Thomas, pp. 10, 27, 44).
Renowned Sinologist Anna Seidel refers to texts and even items produced by ancient Chinese sages as apocryphal and studied their uses during Six Dynasties China (A.D. 220 to 589). These artifacts were used as symbols legitimizing and guaranteeing the Emperor's Heavenly Mandate. Examples of these include talismans, charts, writs, tallies, and registers. The first examples were stones, jade pieces, bronze vessels and weapons, but came to include talismans and magic diagrams. From their roots in Zhou era China (1066 to 256 B.C.) these items came to be surpassed in value by texts by the Han dynasty (206 B.C. to A.D. 220). Most of these texts have been destroyed as Emperors, particularly during the Han dynasty, collected these legitimizing objects and proscribed, forbade and burnt nearly all of them to prevent them from falling into the hands of political rivals. It is therefore fitting with the Greek root of the word, as these texts were obviously hidden away to protect the ruling Emperor from challenges to his status as Heaven's choice as sovereign.
Writings of questionable value.
'Apocrypha' was also applied to writings that were hidden not because of their divinity but because of their questionable value to the church. Many in Protestant traditions cite Revelation 22:18–19 as a potential curse for those who attach any canonical authority to extra-biblical writings such as the Apocrypha. However, a strict explanation of this text would indicate it was meant for only the Book of Revelation. Rv.22:18–19f. (KJV) states: 'For I testify unto every man that heareth the words of the prophecy of this book, If any man shall add unto these things, God shall add unto him the plagues that are written in this book: And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life, and out of the holy city, and from the things which are written in this book.' In this case, if one holds to a strict hermeneutic, the 'words of the prophecy' do not refer to the Bible as a whole but to Jesus' 'Revelation' to John. Origen, in 'Commentaries on Matthew', distinguishes between writings which were read by the churches and apocryphal writings: 'writing not found on the common and published books on one hand, actually found on the secret ones on the other'. The meaning of αποκρυφος is here practically equivalent to 'excluded from the public use of the church', and prepares the way for an even less favourable use of the word.
Spurious writings.
In general use, the word 'apocrypha' came to mean 'false, spurious, bad, or heretical.' This meaning also appears in Origen's prologue to his commentary on the Song of Songs, of which only the Latin translation survives: 'De scripturis his, quae appellantur apocryphae, pro eo quod multa in iis corrupta et contra fidem veram inveniuntur a majoribus tradita non placuit iis dari locum nec admitti ad auctoritatem.' 'Concerning these scriptures, which are called apocryphal, for the reason that many things are found in them corrupt and against the true faith handed down by the elders, it has pleased them that they not be given a place nor be admitted to authority.'
Other.
Other uses of 'apocrypha' developed over the history of Western Christianity. The Gelasian Decree refers to religious works by church fathers Eusebius, Tertullian and Clement of Alexandria as apocrypha. Augustine defined the word as meaning simply 'obscurity of origin,' implying that any book of unknown authorship or questionable authenticity would be considered as apocryphal. On the other hand, Jerome (in 'Prologus Galeatus') declared that all books outside the Hebrew canon were apocryphal. In practice, Jerome treated some books outside the Hebrew canon as if they were canonical, and the Western Church did not accept Jerome's definition of apocrypha, instead retaining the word's prior meaning ('see: Deuterocanon'). As a result, various church authorities labeled different books as apocrypha, treating them with varying levels of regard.
Some apocryphal books were included in the Septuagint, a Greek translation of the Hebrew Scriptures compiled around 280 B.C., with little distinction made between them and the rest of the Old Testament. Origen, Clement and others cited some apocryphal books as 'scripture,' 'divine scripture,' 'inspired,' and the like. On the other hand, teachers connected with Palestine and familiar with the Hebrew canon excluded from the canon all of the Old Testament not found there. This view is reflected in the canon of Melito of Sardis, and in the prefaces and letters of Jerome. A third view was that the books were not as valuable as the canonical scriptures of the Hebrew collection, but were of value for moral uses, as introductory texts for new converts from paganism, and to be read in congregations. They were referred to as 'ecclesiastical' works by Rufinus.
These three opinions regarding the apocryphal books prevailed until the Protestant Reformation, when the idea of what constitutes canon became a matter of primary concern for Roman Catholics and Protestants alike. In 1546 the Catholic Council of Trent reconfirmed the canon of Augustine, dating to the second and third centuries, declaring 'He is also to be anathema who does not receive these entire books, with all their parts, as they have been accustomed to be read in the Catholic Church, and are found in the ancient editions of the Latin Vulgate, as sacred and canonical.' The whole of the books in question, with the exception of 1 Esdras and 2 Esdras and the Prayer of Manasseh, were declared canonical at Trent. The Protestants, in comparison, were diverse in their opinion of the deuterocanon. Some considered them divinely inspired, others rejected them. Anglicans took a position between the Catholic Church and the Protestant Churches; they kept them as Christian intertestamental readings and a part of the Bible, but no doctrine should be based on them. John Wycliffe, a 14th-century Christian Humanist, had declared in his biblical translation that 'whatever book is in the Old Testament besides these twenty-five shall be set among the apocrypha, that is, without authority or belief.' Nevertheless, his translation of the Bible included the apocrypha and the Epistle of the Laodiceans.
The respect accorded to apocryphal books varied between Protestant denominations. In both the German (1534) and English (1535) translations of the Bible, the apocrypha are published in a separate section from the other books, although the Lutheran and Anglican lists are different. In some editions (like the Westminster), readers were warned that these books were not 'to be any otherwise approved or made use of than other human writings.' A milder distinction was expressed elsewhere, such as in the 'argument' introducing them in the Geneva Bible, and in the Sixth Article of the Church of England, where it is said that 'the other books the church doth read for example of life and instruction of manners,' though not to establish doctrine.
According to the Orthodox Anglican Church:
Metaphorical usage.
The adjective 'apocryphal' is commonly used in modern English to refer to any text or story considered to be of dubious veracity or authority, although it may contain some moral truth. In this broader metaphorical sense, the word suggests a claim that is in the nature of folklore, factoid or urban legend.
Texts.
Judaism.
Although traditional rabbinical Judaism insists on the exclusive canonization of the current 24 books in the Tanakh, it also claims to have an oral law handed down from Moses. The Sadducees—unlike the Pharisees but like the Samaritans—seem to have maintained an earlier and smaller number of texts as canonical, preferring to hold to only what was written in the Law of Moses (making most of the presently accepted canon, both Jewish and Christian, 'apocryphal' in their eyes). Certain circles in Judaism, such as the Essenes in Judea and the Therapeutae in Egypt, were said to have a secret literature (see Dead Sea scrolls). Other traditions maintained different customs regarding canonicity. The Ethiopic Jews, for instance, seem to have retained a spread of canonical texts similar to the Ethiopian Orthodox Christians, cf 'Encyclopaedia Judaica', Vol 6, p 1147. A large part of this literature consisted of the apocalypses. Based on prophecies, these apocalyptic books were not considered scripture by all, but rather part of a literary form that flourished from 200 BCE to CE 100.
Intertestamental.
During the birth of Christianity, some of the Jewish apocrypha that dealt with the coming of the Messianic kingdom became popular in the rising Jewish Christian communities. Occasionally these writings were changed or added to, but on the whole it was found sufficient to reinterpret them as conforming to a Christian viewpoint. Christianity eventually gave birth to new apocalyptic works, some of which were derived from traditional Jewish sources. Some of the Jewish apocrypha were part of the ordinary religious literature of the Early Christians. This was not strange, as the large majority of Old Testament references in the New Testament are taken from the Greek Septuagint, which is the source of the deuterocanonical books as well as most of the other biblical apocrypha.
Slightly varying collections of additional Books (called deuterocanonical by the Roman Catholic Church) form part of the Roman Catholic, Eastern Orthodox and Oriental Orthodox canons. See Development of the Old Testament canon.
The Book of Enoch is included in the biblical canon only of the Oriental Orthodox churches of Ethiopia and Eritrea. The Epistle of Jude quotes the book of Enoch, and some believe the use of this book also appears in the four gospels and 1 Peter. The genuineness and inspiration of Enoch were believed in by the writer of the Epistle of Barnabas, Irenaeus, Tertullian and Clement of Alexandria and much of the early church. The epistles of Paul and the gospels also show influences from the Book of Jubilees, which is part of the Ethiopian canon, as well as the Assumption of Moses and the Testaments of the Twelve Patriarchs, which are included in no biblical canon.
The high position which some apocryphal books occupied in the first two centuries was undermined by a variety of influences in the Christian church. All claims to the possession of a secret tradition (as held by many Gnostic sects) were denied by influential theologians such as Irenaeus and Tertullian, whom modern historians refer to as the Proto-orthodox; the timeframe of true inspiration was limited to the apostolic age, and universal acceptance by the church was required as proof of apostolic authorship. As these principles gained currency, books deemed apocryphal tended to become regarded as spurious and heretical writings, though books now considered deuterocanonical have been used in liturgy and theology from the first century to the present.
Christianity.
New Testament apocrypha—books similar to those in the New Testament but almost universally rejected by Catholics, Orthodox and Protestants—include several gospels and lives of apostles. Some were written by early Jewish Christians (see the Gospel according to the Hebrews). Others of these were produced by Gnostic authors or members of other groups later defined as heterodox. Many texts believed lost for centuries were unearthed in the 19th and 20th centuries, producing lively speculation about their importance in early Christianity among religious scholars, while many others survive only in the form of quotations from them in other writings; for some, no more than the title is known. Artists and theologians have drawn upon the New Testament apocrypha for such matters as the names of Dismas and Gestas and details about the Three Wise Men. The first explicit mention of the perpetual virginity of Mary is found in the pseudepigraphical Infancy Gospel of James.
The Gnostic tradition was a prolific source of apocryphal gospels. While these writings borrowed the characteristic poetic features of apocalyptic literature from Judaism, Gnostic sects largely insisted on allegorical interpretations based on a secret apostolic tradition. With them, these apocryphal books were highly esteemed. A well-known Gnostic apocryphal book is the Gospel of Thomas, the only complete text of which was found in the Egyptian town of Nag Hammadi in 1945. The Gospel of Judas, a Gnostic gospel, also received much media attention when it was reconstructed in 2006.
Roman Catholics and Orthodox Christians as well as Protestants generally agree on the canon of the New Testament, see Development of the New Testament canon. The Ethiopian Orthodox have in the past also included I & II Clement and Shepherd of Hermas in their New Testament canon.
The Church of Jesus Christ of Latter-day Saints.
Joseph Smith, Jr. said that when compiling his inspired version of the Holy Bible, he inquired of Heavenly Father about what to do regarding the Apocrypha, that is, the Deuterocanonical Books of the Catholic Bible that are not among the 66 books contained in the 1769 edition of the Authorized King James Bible. What Smith claimed to receive from God is now stated in Section 91 of the Doctrine and Covenants of The Church of Jesus Christ of Latter-day Saints.
'Verily, thus saith the Lord unto you concerning the Apocrypha-There are many things contained therein that are true, and it is mostly translated correctly; There are many things that are not true, which are interpolations by the hands of men. Verily, I say unto you, that it is not needful that the Apocrypha should be translated. Therefore, whoso readeth it, let him understand, for the spirit manifesteth truth; And whoso is enlightened by the Spirit shall obtain benefit therefrom; And whoso receiveth not by the Spirit, cannot be benefited. Therefore it is not needful that it should be translated. Amen.'
The 91st Section of the Doctrine and Covenants is the reason that The Church of Jesus Christ of Latter-day Saints currently uses the 1769 edition of the Authorized King James Bible along with excerpts from the Joseph Smith Translation (JST). Furthermore, although the 1769 edition of the Authorized King James Bible was canonized, Joseph Smith Jr. noted that the Song of Songs was not inspired; it is therefore considered Apocrypha despite being contained in the canon. The Community of Christ, another offshoot of the Latter Day Saint movement, has canonized the JST and therefore has excluded the Song of Solomon.
Confucianism and Taoism.
Prophetic texts called the 'Ch'an-wei' were written by Han Dynasty (206 BCE to 220 CE) Taoist priests to legitimize as well as curb imperial power. They deal with treasure objects that were part of the Zhou (1066 to 256 BCE) royal treasures. Emerging from the instability of the Warring States Period (476–221 BCE), ancient Chinese scholars saw the centralized rule of the Zhou as an ideal model for the new Han empire to emulate. The 'Ch'an-wei' are therefore texts written by Han scholars about the Zhou royal treasures, only they were not written to record history for its own sake, but to legitimize the current imperial reign. These texts took the form of stories about texts and objects being conferred upon the Emperors by Heaven and comprising these ancient sage-kings' (as the Zhou rulers were referred to by this time, about 500 years after their peak) royal regalia. The desired effect was to confirm the Han emperor's Heavenly Mandate through the continuity offered by his possession of these same sacred talismans. It is because of this politicized recording of their history that it is difficult to retrace the exact origins of these objects. What is known is that these texts were most likely produced by a class of literati called the 'fangshi'. These were a class of nobles who were not part of the state administration; they were considered specialists or occultists, for example diviners, astrologers, alchemists or healers. It is from this class of nobles that the first Taoist priests are believed to have emerged. Seidel points out, however, that the scarcity of sources relating to the formation of early Taoism makes the exact link between the apocryphal texts and Taoist beliefs unclear.
Buddhism.
Apocryphal Jatakas of the Pali Buddhist canon, such as those belonging to the Paññāsajātaka collection, have been adapted to fit local culture in certain South East Asian countries and have been retold with amendments to the plots to better reflect Buddhist morals.
Within the Pali tradition, the apocryphal Jatakas of later composition (some dated even to the 19th century) are treated as a separate category of literature from the 'Official' Jataka stories that have been more-or-less formally canonized from at least the 5th century—as attested to in ample epigraphic and archaeological evidence, such as extant illustrations in bas relief from ancient temple walls.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1291'>
Antarctic Treaty System
The Antarctic Treaty and related agreements, collectively known as the Antarctic Treaty System (ATS), regulate international relations with respect to Antarctica, Earth's only continent without a native human population. For the purposes of the treaty system, Antarctica is defined as all of the land and ice shelves south of 60°S latitude. The treaty, entering into force in 1961 and currently having 50 parties, sets aside Antarctica as a scientific preserve, establishes freedom of scientific investigation and bans military activity on that continent. The treaty was the first arms control agreement established during the Cold War. The Antarctic Treaty Secretariat headquarters have been located in Buenos Aires, Argentina, since September 2004.
The main treaty was opened for signature on December 1, 1959, and officially entered into force on June 23, 1961. The original signatories were the 12 countries active in Antarctica during the International Geophysical Year (IGY) of 1957–58. The twelve countries had significant interests in Antarctica at the time: Argentina, Australia, Belgium, Chile, France, Japan, New Zealand, Norway, South Africa, the Soviet Union, the United Kingdom and the United States. These countries had established over 50 Antarctic stations for the IGY. The treaty was a diplomatic expression of the operational and scientific cooperation that had been achieved 'on the ice'.
Articles of the Antarctic Treaty.
The main objective of the ATS is to ensure in the interests of all humankind that Antarctica shall continue forever to be used exclusively for peaceful purposes and shall not become the scene or object of international discord. Pursuant to Article 1, the treaty forbids any measures of a military nature, but not the presence of military personnel or equipment for the purposes of scientific research.
Other agreements.
Other agreements — some 200 recommendations adopted at treaty consultative meetings and ratified by governments — include:
Meetings.
The Antarctic Treaty System's yearly 'Antarctic Treaty Consultative Meetings (ATCM)' are the international forum for the administration and management of the region. Only 29 of the 50 parties to the agreements have the right to participate in decision-making at these meetings, though the other 21 are still allowed to attend. The decision-making participants are the 'Consultative Parties' and, in addition to the 12 original signatories, include 17 countries that have demonstrated their interest in Antarctica by carrying out substantial scientific activity there.
Parties.
As of 2014, there are 50 states party to the treaty, 29 of which, including all 12 original signatories to the treaty, have consultative (voting) status. Consultative members include the seven nations that claim portions of Antarctica as national territory. The 43 non-claimant nations either do not recognize the claims of others, or have not stated their positions.
Note: The table can be sorted alphabetically or chronologically using the icon.
Claims overlap.<br>
Reserved the right to claim areas.
Antarctic Treaty Secretariat.
The 'Antarctic Treaty Secretariat' was established in Buenos Aires, Argentina in September 2004 by the Antarctic Treaty Consultative Meeting (ATCM). Jan Huber (Netherlands) served as the first Executive Secretary for five years until August 31, 2009. He was succeeded on September 1, 2009 by Manfred Reinke (Germany).
The tasks of the Antarctic Treaty Secretariat can be divided into the following areas:
Legal system.
Antarctica has no permanent population and therefore no citizenship or government. All personnel present on Antarctica at any time are citizens or nationals of some sovereignty outside Antarctica, as there is no Antarctic sovereignty. The majority of Antarctica is claimed by one or more countries, but most countries do not explicitly recognize those claims. The area on the mainland between 90 degrees west and 150 degrees west, combined with the interior of the Norwegian Sector (the extent of which has never been officially defined), is the only major land on Earth not claimed by any country. The Treaty prohibits any nation from claiming this area.
Governments that are party to the Antarctic Treaty and its Protocol on Environmental Protection implement the articles of these agreements, and decisions taken under them, through national laws. These laws generally apply only to their own citizens, wherever they are in Antarctica, and serve to enforce the consensus decisions of the consultative parties: about which activities are acceptable, which areas require permits to enter, what processes of environmental impact assessment must precede activities, and so on. The Antarctic Treaty is often considered to represent an example of the Common heritage of mankind principle.
Argentina.
According to Argentine regulations, any crime committed within 50 kilometers of any Argentine base is to be judged in Ushuaia (as capital of Tierra del Fuego, Antarctica, and South Atlantic Islands). In the part of Argentine Antarctica that is also claimed by Chile and the United Kingdom, the person to be judged can ask to be transferred there.
Australia.
Since the designation of the Australian Antarctic Territory pre-dated the signing of the Antarctic Treaty, Australian laws that relate to Antarctica date from more than two decades before the Antarctic Treaty era. In terms of criminal law, the laws that apply to the Jervis Bay Territory (which follows the laws of the Australian Capital Territory) apply to the Australian Antarctic Territory. Key Australian legislation applying Antarctic Treaty System decisions includes the 'Antarctic Treaty Act 1960', the 'Antarctic Treaty (Environment Protection) Act 1980' and the 'Antarctic Marine Living Resources Conservation Act 1981'.
United States.
The law of the United States, including certain criminal offenses by or against U.S. nationals, such as murder, may apply to areas not under jurisdiction of other countries. To this end, the United States now stations special deputy U.S. Marshals in Antarctica to provide a law enforcement presence.
Some U.S. laws directly apply to Antarctica. For example, the Antarctic Conservation Act, Public Law 95-541, 'et seq.', provides civil and criminal penalties for the following activities, unless authorized by regulation or statute:
Violation of the Antarctic Conservation Act carries penalties of up to US$10,000 in fines and one year in prison. The Departments of the Treasury, Commerce, Transportation, and the Interior share enforcement responsibilities. The Act requires expeditions from the U.S. to Antarctica to notify, in advance, the Office of Oceans and Polar Affairs of the State Department, which reports such plans to other nations as required by the Antarctic Treaty. Further information is provided by the Office of Polar Programs of the National Science Foundation.
New Zealand.
In 2006, the New Zealand police reported that jurisdictional issues prevented them issuing warrants for potential American witnesses who were reluctant to testify during the Christchurch Coroner's investigation into the death by poisoning of Australian astrophysicist Rodney Marks at the South Pole base in May 2000. Dr. Marks died while wintering over at the United States' Amundsen–Scott South Pole Station located at the geographic South Pole. Prior to autopsy, the death was attributed to natural causes by the National Science Foundation and the contractor administering the base. However, an autopsy in New Zealand revealed that Dr. Marks died from methanol poisoning. The New Zealand Police launched an investigation. In 2006, frustrated by lack of progress, the Christchurch Coroner said that it was unlikely that Dr. Marks ingested the methanol knowingly, although there is no certainty that he died as the direct result of the act of another person. During media interviews, the police detective in charge of the investigation criticized the National Science Foundation and contractor Raytheon for failing to co-operate with the investigation.
South Africa.
South African law applies to all South African citizens in Antarctica, and they are subject to the jurisdiction of the magistrate's court in Cape Town. In regard to violations of the Antarctic Treaty and related agreements, South Africa also asserts jurisdiction over South African residents and members of expeditions organised in South Africa.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1293'>
Alfred Lawson
Alfred William Lawson (March 24, 1869 – November 29, 1954) was a professional baseball player, manager and league promoter from 1887 through 1916 and went on to play a pioneering role in the US aircraft industry, publishing two early aviation trade journals. In 1904, he also wrote a novel, 'Born Again', clearly inspired by the popular Utopian fantasy 'Looking Backward' by Edward Bellamy, an early harbinger of the metaphysical turn his career would take with the theory of Lawsonomy. He is frequently cited as the inventor of the airliner and was awarded several of the first air mail contracts, which he ultimately could not fulfill. He founded the Lawson Aircraft Company in Green Bay, Wisconsin, to build military training aircraft and later the Lawson Airplane Company in Milwaukee, Wisconsin, to build airliners. The crash of his ambitious Lawson L-4 'Midnight Liner' during its trial flight takeoff on May 8, 1921, ended his best chance for commercial aviation success.
Baseball career (1888-1907).
He made one start for the Boston Beaneaters and two for the Pittsburgh Alleghenys during the 1890 season. His minor league playing career lasted through 1895. He later managed in the minors from 1905 to 1907.
Union Professional League.
In 1908 he started a new professional baseball league known as the Union Professional League. The league took the field in April but folded one month later owing to financial difficulties.
Aviation career (1908-1928).
An early advocate, indeed evangelist, of aviation, Lawson started the magazine 'Fly' in October 1908 to stimulate public interest and educate readers in the fundamentals of the new science of aviation. It sold for 10 cents a copy from newsstands across the country. In 1910, moving to New York City, he renamed the magazine 'Aircraft' and published it until 1914. The magazine chronicled the technical developments of the early aviation pioneers. He was the first advocate for commercial air travel, coining the term 'airline.' He also advocated for a strong American flying force, lobbying Congress in 1913 to expand its appropriations for Army aircraft.
In early 1913, he learned to fly the Sloan-Deperdussin and the Moisant-Bleriot monoplanes, becoming an accomplished pilot. Later that year he bought a Thomas flying boat and became the first air commuter regularly flying from his country house in Seidler's Beach NJ to the foot of 75th Street in NYC (about 35 miles). In 1917, utilizing the knowledge gained from 10 years advocating aviation, he built his first airplane, the Lawson Military Tractor 1 (MT-1) trainer, and founded the Lawson Aircraft Corporation. The company's plant was sited at Green Bay, WI. There he secured a contract and built the Lawson MT-2. He also designed the steel fuselage Lawson Armored Battler, which never got beyond the drafting board, given doubts within the Army aviation community and the signing of the armistice.
After the war, in 1919 Lawson started a project to build America's first airline. He secured financial backing, and in five months he had built and demonstrated in flight his biplane airliner, the 18-passenger Lawson L-2. He demonstrated its capabilities in a 2000-mile multi-city tour from Milwaukee to Chicago-Toledo-Cleveland-Buffalo-Syracuse-New York City-Washington DC-Collinsville-Dayton-Chicago and back to Milwaukee, creating a buzz of positive press. The publicity allowed him to secure an additional $1 million to build the 26-passenger Midnight Liner. In late 1920, he secured government contracts for three airmail routes and to deliver 10 war planes, but owing to the fall 1920 recession, he could not secure the necessary $100,000 in cash reserves called for in the contracts and had to decline them. In 1926 he started his last airliner, the 56-seat, two-tier Lawson super airliner. The aircraft crashed on takeoff on its maiden flight.
In this phase of his life, he was considered one of the leading thinkers in the budding American commercial aviation community, but his troubles with getting financial backing for his ideas led him to turn to economics, philosophy, and organization.
Lawsonomy (1929-1954).
In the 1920s, he promoted health practices, including vegetarianism, and claimed to have found the secret of living to 200. He also developed his own highly unusual theories of physics, according to which such concepts as 'penetrability', 'suction and pressure' and 'zig-zag-and-swirl' were discoveries on par with Einstein's Theory of Relativity. He published numerous books on these concepts, all set in a distinctive typography. Lawson repeatedly predicted the worldwide adoption of Lawsonian principles by the year 2000.
He later propounded his own philosophy, Lawsonomy, and the Lawsonian religion. He also developed, during the Great Depression, the populist economic theory of 'Direct Credits', according to which banks are the cause of all economic woes, the oppressors of both capital and labour. Lawson believed that the government should replace banks as the provider of loans to business and workers. His rallies and lectures attracted thousands of listeners in the early 1930s, mainly in the upper Midwest, but by the late '30s the crowds had dwindled.
In 1943, he founded the University of Lawsonomy in Des Moines to spread his teachings and offer the degree of 'Knowledgian,' but after various IRS and other investigations it was closed and finally sold in 1954, the year of Lawson's death. Lawson's financial arrangements remain mysterious to this day, and in later years he seems to have owned little property, moving from city to city as a guest of his far-flung acolytes. In 1952, he was brought before a United States Senate investigative committee on allegations that his organization had bought war surplus machines and then sold them for a profit, despite claiming non-profit status. His attempt to explain Lawsonomy to the senators ended in mutual frustration and bafflement.
A farm near Racine, Wisconsin, is the only remaining university facility, although a tiny handful of churches may yet survive in places such as Wichita, Kansas. The large sign, formerly reading 'University of Lawsonomy', was a familiar landmark for motorists in the region for many years, visible from I-94 about 13 miles north of the Illinois state line, on the east side of the highway; in later years the 'of' was replaced by the URL of the group's website. Since a storm in spring 2009, the sign is no longer there, although the supporting posts are still visible. On the northbound side of I-94, a sign on the roof of the building nearest the freeway says 'Study Natural Law.'
Personal.
Lawson's brother, George H. Lawson, founded the United States League in 1910. The new professional baseball league had the intent to racially integrate. The league lasted less than a season, but it was revived for one season by George Lawson's associates in 1912.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1298'>
Ames, Iowa
Ames is a city located in the central part of the U.S. state of Iowa in Story County. Lying approximately north of Des Moines, it had a 2010 population of 58,965. The U.S. Census Bureau designates the Ames metropolitan statistical area as encompassing all of Story County; combined with the Boone, Iowa micropolitan statistical area (Boone County, Iowa), the pair make up the larger Ames-Boone combined statistical area. While Ames is the largest city in Story County, the county seat is in the nearby city of Nevada east of Ames.
Ames is the home of Iowa State University of Science and Technology (ISU), a public research institution with leading Agriculture, Design, Engineering, and Veterinary Medicine colleges. ISU is the nation's first designated land-grant university, and the birthplace of the Atanasoff–Berry Computer, the world's first electronic digital computer. Ames hosts one of two national sites for the United States Department of Agriculture's Animal and Plant Health Inspection Service (APHIS), which comprises the National Veterinary Services Laboratory and the Center for Veterinary Biologics. Ames is also the home of the USDA's Agricultural Research Service's National Animal Disease Center. NADC is the largest federal animal disease center in the U.S., conducting research aimed at solving animal health and food safety problems faced by livestock producers and the public. Ames has the headquarters for the Iowa Department of Transportation.
In 2010, Ames was ranked ninth on CNNMoney.com 'Best Places to Live' list.
History.
The city was founded in 1864 as a station stop on the Cedar Rapids and Missouri Railroad and was named after 19th century U.S. Congressman Oakes Ames of Massachusetts, who was influential in the building of the transcontinental railroad. Ames was founded by local resident Cynthia Olive Duff (née Kellogg) and railroad man John Insley Blair, near a location that was deemed favorable for a railroad crossing of the Skunk River.
Geography.
According to the United States Census Bureau, the city has a total area of , of which is land and is water.
Ames is located on Interstate 35, U.S. Route 30 & 69, and the cross country line of the Union Pacific Railroad, located roughly north of the state capital Des Moines. Two small streams run through the town: the South Skunk River and Squaw Creek.
Neighborhoods.
Ames is made up of several distinct neighborhoods, including Allenview, Bentwood, Bloomington Heights, Broadmoor, Campustown, College Heights, College Park, Colonial Village (Ames's first modern housing development, dating to 1939), Country Gables, Dauntless, Dayton Park, East Hickory Park, Gateway Green Hills, Gateway Hills, Hillside, Iowa State University, Little Hollywood, Main Street Cultural District (Downtown Ames), Melrose Park, Northridge Heights, Northridge Parkway, Old Town Historic Preservation District, Ontario Heights, Parkview Heights, Ridgewood, Ringgenberg Park, Suncrest, Somerset, South Fork, South Gateway, Spring Valley, Stone Brooke, Sunset Ridge, and West Ames.
Campustown.
Campustown is the neighborhood directly south of Iowa State University Central Campus bordered by Lincoln Way on the north. Campustown is a high-density mixed-use neighborhood that is home to many student apartments, nightlife venues, restaurants, and numerous other establishments, most of which are unique to Ames.
Climate.
Ames has a humid continental climate (Köppen climate classification 'Dfa'). On average, the warmest month is July and the coldest is January. The highest recorded temperature was in 1988 and the lowest was −28 °F in 1996.
Demographics.
2010 census.
As of the census of 2010, there were 58,965 people, 22,759 households, and 9,959 families residing in the city. The population density was . There were 23,876 housing units at an average density of . The racial makeup of the city was 84.5% White, 3.4% African American, 0.2% Native American, 8.8% Asian, 1.1% from other races, and 2.0% from two or more races. Hispanic or Latino of any race were 3.4% of the population.
There were 22,759 households of which 19.1% had children under the age of 18 living with them, 35.6% were married couples living together, 5.4% had a female householder with no husband present, 2.7% had a male householder with no wife present, and 56.2% were non-families. 30.5% of all households were made up of individuals and 6.2% had someone living alone who was 65 years of age or older. The average household size was 2.25 and the average family size was 2.82.
The median age in the city was 23.8 years. 13.4% of residents were under the age of 18; 40.5% were between the ages of 18 and 24; 22.9% were from 25 to 44; 15% were from 45 to 64; and 8.1% were 65 years of age or older. The gender makeup of the city was 53.0% male and 47.0% female.
2000 census.
As of the census of 2000, there were 50,731 people, 18,085 households, and 8,970 families residing in the city. The population density was 2,352.3 people per square mile (908.1/km²). There were 18,757 housing units at an average density of 869.7 per square mile (335.7/km²). The racial makeup of the city was 87.34% White, 7.70% Asian, 2.65% African American, 0.04% American Indian, 0.76% Pacific Islander and other races, and 1.36% from two or more races. Hispanic or Latino of any race were 1.98% of the population.
There were 18,085 households out of which 22.3% had children under the age of 18 living with them, 42.0% were married couples living together, 5.3% had a female householder with no husband present, and 50.4% were non-families. 28.5% of all households were made up of individuals and 5.9% had someone living alone who was 65 years of age or older. The average household size was 2.30 and the average family size was 2.85.
Age spread: 14.6% under the age of 18, 40.0% from 18 to 24, 23.7% from 25 to 44, 13.9% from 45 to 64, and 7.7% who were 65 years of age or older. The median age was 24 years. For every 100 females there were 109.3 males. For every 100 females age 18 and over, there were 109.9 males.
The median income for a household in the city was $36,042, and the median income for a family was $56,439. Males had a median income of $37,877 versus $28,198 for females. The per capita income for the city was $18,881. About 7.6% of families and 20.4% of the population were below the poverty line, including 9.2% of those under age 18 and 4.1% of those age 65 or over.
Metropolitan area.
Ames is the larger principal city of the Ames–Boone CSA, a Combined Statistical Area that includes the Ames metropolitan area (Story County) and the Boone micropolitan area (Boone County), which had a combined population of 106,205 at the 2000 census.
Economy.
Ames is home to Iowa State University of Science and Technology, a public land-grant and space-grant research university and a member of the prestigious Association of American Universities. At its founding in 1858, Iowa State was known as the Iowa State College of Agriculture and Mechanic Arts. Ames is also the home of the closely allied U.S. Department of Agriculture's National Animal Disease Center (see Ames strain), the U.S. Department of Energy's Ames Laboratory (a major materials research and development facility), and the main offices of the Iowa Department of Transportation. State and federal institutions are the largest employers in Ames.
Other area employers include a 3M manufacturing plant; Sauer-Danfoss, a hydraulics manufacturer; Barilla, a pasta manufacturer; Ball, a manufacturer of canning jars and plastic bottles; Renewable Energy Group, America's largest producer of biomass-based diesel; and the National Farmers Organization.
Top employers.
According to Ames's 2013 Comprehensive Annual Financial Report, the top employers in the city are:
Arts and culture.
Velma Wallace Rayness
Ames, Iowa was home to Gerard M. and Velma Wallace Rayness.
Both artists taught art and were nationally recognized artists.
Their art was exhibited nationally as well as abroad.
Gerard died in the 1940s. Velma Wallace Rayness died in 1977.
Velma Wallace Rayness usually signed her paintings 'V.W. Rayness'.
Sports.
The Iowa State University Cyclones play a variety of sports in the Ames area. The Cyclones' football team plays at Jack Trice Stadium near Ames. Also, the Cyclones' Men's and Women's Basketball teams and Volleyball team play at Hilton Coliseum just across the street from Jack Trice Stadium. The Iowa State Cyclones are a charter member of the Big 12 Conference in all sports and compete in NCAA Division I-A.
The local figure skating club provides recreational to professional level skating opportunities. The club sponsors the Learn to Skate Program, and its coaches provide on- and off-ice lessons and workshops. The club hosts the figure skating portion of the Iowa Games competition every summer and the Cyclone Country Championships in the fall. Every year the club puts on the Winter Gala, and its biggest event is the annual Spring Ice Show, in which skaters from young children to adults perform their best moves.
Parks and recreation.
The Ames area has a large number of parks and arboretums.
Specialized Parks:
Community Parks:
Neighborhood Parks:
Education.
Ames High School: Grades 9–12
Iowa State University.
Iowa State University of Science and Technology, more commonly known as Iowa State University (ISU), is a public land-grant and space-grant research university located in Ames. Iowa State has produced a number of astronauts, scientists, Nobel laureates, Pulitzer Prize winners, and a variety of other notable individuals in their respective fields. Until 1945 it was known as the Iowa State College of Agriculture and Mechanic Arts. The university is a member of the Association of American Universities and the Big 12 Conference.
In 1856, the Iowa General Assembly enacted legislation to establish the State Agricultural College and Model Farm. Story County was chosen as the location on June 21, 1859, from proposals by Johnson, Kossuth, Marshall, Polk, and Story counties. When Iowa accepted the provisions of the Morrill Act of 1862, Iowa State became the first institution in the nation designated as a land-grant college. The institution was coeducational from the first preparatory class admitted in 1868. The formal admission of students began the following year, and the first graduating class of 1872 consisted of 24 men and 2 women.
The first building on the Iowa State campus was Farm House. Built in the 1860s, it currently serves as a museum and National Historic Landmark. Today, Iowa State has over 60 notable buildings, including Beardshear Hall, Morrill Hall, Memorial Union, Catt Hall, Curtiss Hall, Carver Hall, Parks Library, the Campanile, Hilton Coliseum, C.Y. Stephens Auditorium, Fisher Theater, Jack Trice Stadium, Lied Recreation Center, numerous residence halls, and many buildings specific to ISU's many different majors and colleges. Iowa State is home to 28,080 students (Spring 2012), who make up approximately one half of the city's population.
The official mascot for ISU is Cy the Cardinal. The official school colors are cardinal and gold. The Iowa State Cyclones play in the NCAA's Division I-A as a member of the Big 12 Conference.
Infrastructure.
Transportation.
The town is served by U.S. Highways 30 and 69 and Interstate 35. Ames is the only town in Iowa with a population of greater than 50,000 that does not have a state highway serving it.
Ames was served by the Fort Dodge, Des Moines and Southern Railroad via a branch from Kelley to Iowa State University and to downtown Ames; the tracks were removed in the 1960s. The Chicago and North Western Transportation Company's twin mainline runs east and west, bisecting the town just south of the downtown business district. The C&NW used to operate a branch to Des Moines; this line was removed in the 1980s when the Spine Line through Nevada was purchased from the Rock Island Railroad after its bankruptcy. The Union Pacific, successor to the C&NW, still runs 60–70 trains a day through Ames on the twin mainlines, which leads to some traffic delays. There is also a branch to Eagle Grove that leaves Ames to the north, and the Union Pacific maintains a small yard, called Ames Yard, east of town between Ames and Nevada. Ames has been testing automatic train horns at several of its crossings. These directional horns, which are focused down the streets, are activated when the crossing signals turn on and are shut off after the train clears the crossing, eliminating the need for trains to blow their horns. Train noise had been a problem in the residential areas to the west and northwest of downtown.
Ames has a municipal airport located southeast of the city. The current (and only) FBO is Hap's Air Service, a company which has been based at the airport since 1975. The airport has two runways, 01/19 and 13/31.
The City of Ames offers a transit system throughout town, called CyRide, that is funded jointly by Iowa State University, the ISU Government of the Student Body, and the City of Ames. Rider fares are subsidized through this funding, and are free for children under five. Students pay a set cost as part of their tuition.
Ames has the headquarters of the Iowa Department of Transportation.
Health care.
Ames is served by Mary Greeley Medical Center, a 220-bed regional referral hospital which is adjacent to McFarland Clinic PC, central Iowa's largest physician-owned multi-specialty clinic, and also Iowa Heart Center.
Other topics.
Politics.
Iowa is a 'battleground state' that has trended slightly Democratic in recent years, and Ames, like Iowa City, also trends Democratic. Because Iowa is the first caucus state and Ames is a college town, it is the site of many political appearances, debates and events, especially during election years.
During every August in which the Republican presidential nomination is undecided (meaning there is no incumbent Republican president—as in, most recently, 2011, 2007, 1999, 1995 and 1987), the town plays host to the Ames Straw Poll, which gauges support for the various Republican candidates amongst attendees of a fundraising dinner benefiting the Iowa Republican Party. The straw poll dates back to 1979, and is frequently seen as a first test of organizational strength in Iowa by the national media and party insiders; as such, it can be very beneficial for a candidate to win the straw poll and thus enhance the candidate's aura of inevitability or show off a superior field operation.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1300'>
Abalone
Abalone (via Spanish, from the Rumsen word 'aulón') is a common name for any of a group of small to very large edible sea snails, marine gastropod molluscs in the family Haliotidae.
Other common names are ear shells, sea ears, muttonfish or muttonshells in Australia, ormer in Great Britain, and pāua in New Zealand.
The family Haliotidae contains only one genus, 'Haliotis', which used to contain 6 subgenera. These subgenera have become alternate representations of 'Haliotis'. The number of species recognized worldwide ranges between 30 and 130 with over 230 species-level taxa described. The most comprehensive treatment of the family considers 56 species valid, with 18 additional subspecies.
The shells of abalones have a low open spiral structure, and are characterized by several open respiratory pores in a row near the shell's outer edge. The thick inner layer of the shell is composed of nacre (mother-of-pearl), which in many species is highly iridescent, giving rise to a range of strong changeable colors, which make the shells attractive to humans as decorative objects, jewelry, and as a source of colorful mother-of-pearl.
The flesh of abalones is widely considered to be a desirable food, and is consumed raw or cooked in a variety of cultures.
Description.
The shell of abalones is convex, rounded to oval shape, and may be highly arched or very flattened. The shell of the majority of species is ear-shaped, presenting a small flat spire and two to three whorls. The last whorl, known as the body whorl, is auriform, meaning that the shell resembles an ear, giving rise to the common name 'ear shell'. 'Haliotis asinina' has a somewhat different shape, as it is more elongated and distended. The shell of 'Haliotis cracherodii cracherodii' is also unusual as it has an ovate form, is imperforate, shows an exserted spire, and has prickly ribs.
A mantle cleft in the shell impresses a groove in the shell, in which lies the row of holes characteristic of the genus. These holes are respiratory apertures for venting water from the gills and for releasing sperm and eggs into the water column. They make up what is known as the selenizone, which forms as the shell grows. This series of 8 to 38 holes is near the anterior margin. Only a small number are generally open. The older holes are gradually sealed up as the shell grows and new holes form. Each species has a typical number of open holes, between four and ten, in the selenizone. Abalone have no operculum. The aperture of the shell is very wide and nacreous.
The exterior of the shell is striated and dull. The color of the shell is very variable from species to species which may reflect the animal's diet. The iridescent nacre that lines the inside of the shell varies in color from silvery white, to pink, red and green-red to deep blue, green to purple.
The animal shows fimbriated head-lobes. The side-lobes are also fimbriated and cirrated. The rounded foot is very large. The radula has small median teeth, and the lateral teeth are single and beam-like. There are about 70 uncini, with denticulated hooks, the first four very large. The soft body is coiled around the columellar muscle, and its insertion, instead of being on the columella, is on the middle of the inner wall of the shell. The gills are symmetrical and both well developed.
These snails cling solidly with their broad muscular foot to rocky surfaces at sublittoral depths, although some species such as 'Haliotis cracherodii' used to be common in the intertidal zone. Abalones reach maturity at a relatively small size. Their fecundity is high and increases with their size (from 10,000 to 11 million eggs at a time). The spermatozoa are filiform and pointed at one end, and the anterior end is a rounded head.
The larvae are lecithotrophic. The adults are herbivorous and feed with their rhipidoglossan radula on macroalgae, preferring red or brown algae. Sizes vary from the small 'Haliotis pulcherrima' to 'Haliotis rufescens', the largest of the genus.
Abalones are herbivorous on hard substrata.
By weight, approximately 1/3 of the animal is edible meat, 1/3 is offal, and 1/3 is shell.
Distribution.
The haliotid family has a worldwide distribution, along the coastal waters of every continent, except the Pacific coast of South America, the East Coast of the United States, the Arctic, and Antarctica. The majority of abalone species are found in cold waters, such as off the coasts of New Zealand, South Africa, Australia, Western North America, and Japan.
Structure and properties of the shell.
The shell of the abalone is exceptionally strong and is made of microscopic calcium carbonate tiles stacked like bricks. Between the layers of tiles is a clingy protein substance. When the abalone shell is struck, the tiles slide instead of shattering and the protein stretches to absorb the energy of the blow. Material scientists around the world are studying this tiled structure for insight into stronger ceramic products such as body armor. The dust created by grinding and cutting abalone shell is dangerous; appropriate safeguards must be taken to protect people from inhaling these particles.
Diseases and pests.
Abalones are subject to various diseases. The Victorian Department of Primary Industries said in 2007 that abalone viral ganglioneuritis (AVG) killed up to 90% of stock in affected regions. Abalone are also severe hemophiliacs, as their fluids will not clot in the case of a laceration or puncture wound. Members of the Spionidae family of polychaetes are known pests of abalone.
Human use.
The meat (foot muscle) of abalone is used for food, and the shells of abalone are used as decorative items and as a source of mother of pearl for jewelry, buttons, buckles, and inlay. Abalone shells have been found in archaeological sites around the world, ranging from 75,000-year-old deposits at Blombos Cave in South Africa to historic Chinese abalone middens on California's Northern Channel Islands. On the Channel Islands, where abalones were harvested by Native Americans for at least 12,000 years, the size of red abalone shells found in middens declines significantly after about 4,000 years ago, probably due to human predation. Worldwide, abalone pearls have also been collected for centuries.
Farming.
Farming of abalone began in the late 1950s and early 1960s in Japan and China. Since the mid-1990s, there have been many increasingly successful endeavors to commercially farm abalone for the purpose of consumption. Over-fishing and poaching have reduced wild populations to such an extent that farmed abalone now supplies most of the abalone meat consumed. The principal abalone farming regions are China, Taiwan, Japan, and Korea. Abalone is also farmed in Australia, Canada, Chile, France, Iceland, Ireland, Mexico, Namibia, New Zealand, South Africa, Thailand, and the United States.
Consumption.
Abalone have long been a valuable food source for humans in every area of the world where a species is abundant. The meat of this mollusc is considered a delicacy in certain parts of Latin America (especially Chile), France, New Zealand, Southeast Asia, and East Asia (especially in China, Vietnam, Japan, and Korea). In Chinese-speaking regions, abalone are commonly known as bao yu, and sometimes form part of a Chinese banquet. In the same way as shark fin soup or bird's nest soup, abalone is considered a luxury item, and is traditionally reserved for special occasions such as weddings and other celebrations. However, the availability of commercially farmed abalone has allowed more common consumption of this once rare delicacy.
In Japan, live and raw abalone are used in awabi sushi, or served steamed, salted, boiled, chopped, or simmered in soy sauce. Salted, fermented abalone entrails are the main component of tottsuru, a local dish from Honshū. Tottsuru is mainly enjoyed with sake.
In California, abalone meat can be found on pizza, sautéed with caramelized mango or in steak form dusted with cracker meal and flour.
Sport harvesting.
Australia.
Tasmania supplies approximately 25% of the yearly world abalone harvest. Around 12,500 Tasmanians recreationally fish for blacklip and greenlip abalone. For blacklip abalone, the size limit differs between the southern and northern ends of the state. Greenlip abalone have a minimum size, except for an area around Perkin's Bay in the north of the state, where a different minimum applies. With a recreational abalone licence, there is a bag limit of 10 per day and a total possession limit of 20. Scuba diving for abalone is allowed and has a rich history in Australia. (Scuba diving for abalone in the states of New South Wales and Western Australia is illegal; a free-diving catch limit of two is allowed.)
Victoria has had an active abalone fishery since the late 1950s. The state is sectioned into three fishing zones, Eastern, Central and Western, with each fisher required to hold a zone-allocated licence. Harvesting is performed by divers using surface-supplied air 'hookah' systems operating from runabout-style, outboard-powered boats. While the diver seeks out colonies of abalone amongst the reef beds, the deckhand operates the boat, known as working 'live', and stays above where the diver is working. Bags of abalone pried from the rocks are brought to the surface by the diver or by way of a 'shot line', whereby the deckhand drops a weighted rope for the catch bag to be connected and then retrieved. Divers measure each abalone before removing it from the reef, and the deckhand re-measures each abalone and removes excess weed growth from the shell. Since 2002, the Victorian industry has seen a significant decline in catches, with the total allowable catch (TAC) reduced from 1440 tonnes to 787 tonnes for the 2011/12 fishing year. This is due to dwindling stocks and most notably the abalone virus ganglioneuritis, which is fast-spreading and lethal to abalone stocks.
United States.
Sport harvesting of red abalone is permitted with a California fishing license and an abalone stamp card. In 2008, the abalone card also came with a set of 24 tags. This was reduced to 18 abalone per year in 2014, only nine of which may be taken south of Mendocino County. Legal-size abalone must be tagged immediately. Abalone may only be taken using breath-hold techniques or shorepicking; scuba diving for abalone is strictly prohibited. Taking of abalone is not permitted south of the mouth of the San Francisco Bay. There is a minimum size, measured across the shell. A person may be in possession of only three abalone at any given time.
Abalone may only be taken from April to November, not including July. Transportation of abalone may only legally occur while the abalone is still attached in the shell. Sale of sport-obtained abalone is illegal, including the shell. Only red abalone may be taken as black, white, pink, flat, green, and pinto abalone are protected by law.
An abalone diver is normally equipped with a thick wetsuit, including a hood, bootees, and gloves, and usually also a mask, snorkel, weight belt, abalone iron, and abalone gauge. Alternatively, a rock picker can feel underneath rocks at low tide for abalone. Abalone are mostly taken in shallow water within reach of breath-hold divers; less common are freedivers who work at greater depths. Abalone are normally found on rocks near food sources such as kelp. An abalone iron is used to pry the abalone from the rock before it can fully clamp down. Divers dive from boats, kayaks, tube floats or directly off the shore.
The largest abalone recorded in California was caught by John Pepper somewhere off the coast of San Mateo County in September 1993.
The mollusc 'Concholepas concholepas' is often sold in the United States under the name 'Chilean abalone', though it is not an abalone, but a muricid.
New Zealand.
In New Zealand, abalone is called pāua (from the Māori language). 'Haliotis iris' (or blackfoot pāua) is the ubiquitous New Zealand pāua, whose highly polished nacre, with its striking blue, green, and purple iridescence, is extremely popular in souvenirs. 'Haliotis australis' and 'Haliotis virginea' are also found in New Zealand waters, but are less popular than 'H. iris'.
Like all New Zealand shellfish, recreational harvesting of pāua does not require a permit, provided catch limits, size restrictions, and seasonal and local restrictions set by the Ministry for Primary Industries (MPI) are followed. The legal recreational daily limit is 10 pāua per diver, with minimum shell lengths specified for 'Haliotis iris' and 'Haliotis australis'. In addition, no person may be in possession, even on land, of more than 20 pāua or more than a set weight of pāua meat at any one time. Pāua can only be caught by free-diving; it is illegal to catch pāua using scuba gear.
There is an extensive global black market in collecting and exporting abalone meat. This can be a particularly awkward problem where the right to harvest pāua can be granted legally under Māori customary rights; when such permits to harvest are abused, it is frequently difficult to police. The limit is strictly enforced by roving Ministry for Primary Industries fishery officers with the backing of the New Zealand Police. Pāua poaching is a major industry in New Zealand, with many thousands being taken illegally, often undersized. Convictions have resulted in seizure of diving gear, boats, and motor vehicles, fines, and, in rare cases, imprisonment. The Ministry of Fisheries expected that in the year 2004/05 nearly 1,000 tons of pāua would be poached, with 75% of that being undersized.
South Africa.
The largest abalone in South Africa, 'Haliotis midae', occurs along approximately two-thirds of the country’s coastline. Abalone-diving has been a recreational activity for many years, but stocks are currently being threatened by illegal commercial harvesting. In South Africa all persons harvesting this shellfish need permits that are issued annually, and no abalone may be harvested using scuba gear.
For the last few years, however, no permits have been issued for collecting abalone, but commercial harvesting still continues as does illegal collection by syndicates.
In 2007, because of widespread poaching of abalone, the South African government listed abalone as an endangered species according to the CITES section III appendix, which requests member governments to monitor the trade in this species. This listing was removed from CITES in June 2010 by the South African government and South African abalone is no longer subject to CITES trade controls. Export permits are still required, however.
The abalone meat from South Africa is prohibited for sale in the country to help reduce poaching; however, much of the illegally harvested meat is sold in Asian countries. As of early 2008, the wholesale price for abalone meat was approximately US$40.00 per kilogram. There is an active trade in the shells, which sell for more than US$1,400 per metric tonne.
Channel Islands.
Ormers ('Haliotis tuberculata') are considered a delicacy in the British Channel Islands as well as in adjacent areas of France, and are pursued with great alacrity by the locals. This has led to a dramatic depletion in numbers since the latter half of the 19th century, and 'ormering' is now strictly regulated in order to preserve stocks. The gathering of ormers is now restricted to a number of 'ormering tides', from January 1 to April 30, which occur on the full or new moon and two days following. No ormers under the minimum shell length may be taken from the beach. Gatherers are not allowed to wear wetsuits or even put their heads underwater. Any breach of these laws is a criminal offence and can lead to a fine of up to £5,000 or six months in prison. The demand for ormers is such that they led to the world's first underwater arrest, when Mr. Kempthorne-Leigh of Guernsey was arrested by a police officer in full diving gear while illegally diving for ormers.
Decorative items.
The highly iridescent inner nacre layer of the shell of abalone has traditionally been used as a decorative item, in jewelry, buttons, and as inlay in furniture and in musical instruments such as guitars, etc.
Abalone pearl jewelry is very popular in New Zealand and Australia, in no minor part due to the marketing and farming efforts of pearl companies. Unlike the Oriental Natural, the Akoya pearl, and the South Sea and Tahitian cultured pearls, abalone pearls are not primarily judged by their roundness. The inner shell of the abalone is an iridescent swirl of intense colours, ranging from deep cobalt blue and peacock green to purples, creams and pinks. Therefore each pearl, natural or cultured, will have its own unique collage of colours.
The shells of abalone are occasionally used in New Age smudging ceremonies to catch falling ash. They have also been used as incense burners.
Medical.
'Abalone juice' has been shown to be an effective inhibitor of penicillin-resistant bacteria.
Threat of extinction.
Abalones have been identified as one of the many classes of organism threatened with extinction by overfishing and by the acidification of oceans from anthropogenic carbon dioxide, as reduced pH erodes their shells. It is predicted that abalones will become extinct in the wild within 200 years at current rates of carbon dioxide production.
Species.
The number of species that are recognized within the genus 'Haliotis' has fluctuated over time, and depends on the source that is consulted. The number of recognized species range from 30 to 130. This list finds a compromise using the 'WoRMS database', plus some species that have been added, for a total of 57. The majority of abalone have not been rated for conservation status. Those that have been reviewed tend to show that the abalone in general is an animal that is declining in numbers, and will need protection throughout the globe.
References.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1301'>
Abbess
An abbess (Latin 'abbatissa', feminine form of 'abbas,' abbot) is the female superior of a community of nuns, often an abbey.
Description.
In the Catholic Church (both the Latin Church and Eastern Catholic), Eastern Orthodox, Coptic and Anglican abbeys, the mode of election, position, rights, and authority of an abbess correspond generally with those of an abbot. She must be at least 40 years old and have been a nun for 10 years. The office is elective, the choice being by the secret votes of the nuns belonging to the community. Like an abbot, after being confirmed in her office by the Holy See, an abbess is solemnly admitted to her office by a formal blessing, conferred by the bishop in whose territory the monastery is located, or by an abbot or another bishop with appropriate permission. Unlike the abbot, the abbess receives only the ring and a copy of the rule of the order. She does not receive a mitre nor is given a crosier as part of the ceremony; however, by ancient tradition, she may carry a crosier when leading her community. The abbess also traditionally adds a pectoral cross to the outside of her habit as a symbol of office, though she continues to wear a modified form of her religious habit or dress, as she is unordained—not a male religious—and so does not vest or use choir dress in the liturgy.
Roles and responsibilities.
Abbesses are, like abbots, major superiors according to canon law, the equivalents of abbots or bishops (the ordained male members of the church hierarchy who have, by right of their own office, executive jurisdiction over a building, diocesan territory, or a communal or non-communal group of persons—juridical entities under church law). They receive the vows of the nuns of the abbey; they may admit candidates to their order's novitiate; they may send them to study; and they may send them to do pastoral and/or missionary work and/or assist—to the extent allowed by canon and civil law—in the administration and ministry of a parish or diocese (these activities could be inside or outside the community's territory). They have full authority in its administration. However, there are certain limitations: they may not administer the sacraments and related functions whose celebration is reserved to bishops, priests, deacons (the male clergy), namely, Holy Orders (they may make provision for an ordained cleric to help train and to admit some of their members, if needed, as altar servers, Eucharistic ministers, or lectors—the minor ministries which are now open to the non-ordained). They may not fill the clerical role of serving as the Mass celebrant and as a clerical witness to a marriage (they may serve as a non-ordained witness alongside the laity, for example, at a friend's wedding). They may not administer Penance (Reconciliation), Anointing of the Sick (Extreme Unction), or function as an ordained celebrant or concelebrant of the Mass (by virtue of their office and their training and institution, they may act, if the need arises, as altar servers, lectors, ushers, porters, or Eucharistic ministers of the Cup, and if need be, the Host). They may preside over a simple prayer service such as the Liturgy of the Hours which they are obliged to say with their community, speak about Scripture to their community, and give certain types of blessings not reserved to the clergy. 
On the other hand, they may not preside over Adoration or Benediction, give a speech that is a homily, read the Gospel during a Mass, or serve as instituted acolytes, a ministry which is now reserved for those preparing for ordained service. As they do not receive Holy Orders in the Catholic, Orthodox and Oriental Churches, they do not possess the ability to ordain any religious to Holy Orders, or even admit their members to the non-ordained ministries to which they can be installed by the ordained clergy (females do not serve as clergy anyway, per formal church teaching, in these churches), nor do they exercise the authority they do possess under canon law over any territories outside of their monastery and its territory (though non-cloistered, non-contemplative female religious members who are based in a convent or monastery but who participate in external affairs may assist as needed by the diocesan bishop and local secular clergy and laity, in certain pastoral ministries and administrative and non-administrative functions not requiring ordained ministry or status as a male cleric in those churches or programs).
History.
Historically, in some Celtic monasteries abbesses presided over joint-houses of monks and nuns, the most famous example being Saint Brigid of Kildare's leadership in the founding of the monastery at Kildare in Ireland. This custom accompanied Celtic monastic missions to France, Spain, and even to Rome itself. In 1115, Robert, the founder of Fontevraud Abbey near Chinon and Saumur, France, committed the government of the whole order, men as well as women, to a female superior.
In Lutheran churches, the title of abbess ('Äbtissin') has in some cases (e.g. Itzehoe) survived to designate the heads of abbeys which since the Protestant Reformation have continued as 'Stifte'. These are collegiate foundations, which provide a home and an income for unmarried ladies, generally of noble birth, called canonesses ('Kanonissinen') or more usually 'Stiftsdamen'. The office of abbess is of considerable social dignity, and in the past was sometimes filled by princesses of the reigning houses. Until the dissolution of the Holy Roman Empire and the mediatization of smaller imperial fiefs by Napoleon, the evangelical Abbess of Quedlinburg was also per officio the head of that 'reichsunmittelbar' state. The last such ruling abbess was Sofia Albertina, Princess of Sweden.
The Roman Catholic church has around 200 abbesses at present.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1303'>
Abdominal surgery
The term abdominal surgery broadly covers surgical procedures that involve opening the abdomen. Surgery of each abdominal organ is dealt with separately in connection with the description of that organ (see stomach, kidney, liver, etc.). Diseases affecting the abdominal cavity are dealt with generally under their own names (e.g. appendicitis).
Types.
The most common abdominal surgeries are described below.
Complications.
Complications of abdominal surgery include, but are not limited to:
Sterile technique, aseptic post-operative care, antibiotics, and vigilant post-operative monitoring greatly reduce the risk of these complications. Planned surgery performed under sterile conditions is much less risky than that performed under emergency or unsterile conditions. The contents of the bowel are unsterile, and thus leakage of bowel contents, as from trauma, substantially increases the risk of infection.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1304'>
Abduction
Abduction may refer to:
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1305'>
Abensberg
Abensberg () is a town in the Lower Bavarian district of Kelheim, in Bavaria, Germany, lying around 30 km southwest of Regensburg, 40 km east of Ingolstadt, 50 km northwest of Landshut and 100 km north of Munich. It is situated on the Abens river, a tributary of the Danube.
Geography.
The town lies on the Abens river, a tributary of the Danube, around eight kilometres from the river's source. The area around Abensberg is characterized by the narrow valley of the Danube, where the Weltenburg Abbey stands, the valley of the Altmühl in the north, a left tributary of the Danube, and the famous Hallertau hops-planting region in the south. The town is divided into the municipalities of Abensberg, Arnhofen, Holzharland, Hörlbach, Offenstetten, Pullach and Sandharland.
Divisions.
Since the administrative reforms in Bavaria in the 1970s, the town also encompasses the following 'Ortsteile':
History.
There had been settlement on this part of the Abens river since long before the High Middle Ages, dating back to Neolithic times. Of particular interest and national importance are the Neolithic flint mines at Arnhofen, where, around 7,000 years ago, Stone Age people mined flint, which was fashioned into drills, blades and arrowheads and was regarded as the steel of the Stone Age. Traces of over 20,000 individuals were found on this site. The modern history of Abensberg, which is often incorrectly compared with that of the 3rd-century Roman castra (military outpost) of Abusina, begins with Gebhard, who was the first to mention Abensberg as a town, in the middle of the 12th century. The earliest written reference to the town, under the name of 'Habensperch', came from this time, in around 1138. Gebhard was from the Babonen clan.
In 1256, the castrum of 'Abensprech' was first mentioned, and on 12 June 1348, Ludwig, Margrave of Brandenburg, and his brother, Stephen, Duke of Bavaria, raised Abensberg to the status of a city, giving it the right to operate lower courts, enclose itself with a wall and hold markets. The wall was built by Ulrich III, Count of Abensberg. Some of the thirty-two round towers and eight turrets are still preserved to this day.
In the Middle Ages, the people of Abensberg enjoyed a level of autonomy above their lord. They elected a city council, although only a small number of rich families were eligible for election.
In around 1390, the Carmelite Monastery of Our Lady of Abensberg was founded by Count John II and his wife, Agnes. Although Abensberg was an autonomous city, it remained dependent on the powerful Dukes of Bavaria. The last Lord of Abensberg, Nicholas, supposedly named after his godfather, Nicholas of Kues, a Catholic cardinal, was murdered in 1485 by Christopher, a Duke of Bavaria-Munich. The year before, Nicholas had unchivalrously taken Christopher captive as he bathed before a tournament in Munich. Although Christopher renounced his claim for revenge, he lay in wait for Nicholas in Freising. When the latter arrived, he was killed by Seitz von Frauenberg. He is buried in the former convent of Abensberg. Abensberg then lost its independence and became a part of the Duchy of Bavaria, and from then on was administered by a ducal official, the so-called caretaker. The castle of Abensberg was destroyed during the Thirty Years' War, although the city had bought a guarantee of protection from the Swedish general, Carl Gustaf Wrangel. Johannes Aventinus (1477–1534) is the city's most famous son, the founder of the study of history in Bavaria. Aventinus, whose real name was Johann or Johannes Turmair ('Aventinus' being the Latin name of his birthplace), wrote the 'Annals of Bavaria', a valuable record of the early history of Germany and the first major written work on the subject. He is commemorated in the Walhalla temple, a monument near Regensburg to the distinguished figures of German history. Until 1800, Abensberg was a municipality belonging to the Straubing district of the Electorate of Bavaria. Abensberg also contained a magistrates' court. In the Battle of Abensberg on 19–20 April 1809, Napoleon gained a significant victory over the Austrians under Archduke Louis of Austria and General Johann von Hiller.
Arms.
The arms of the city are divided into two halves. On the left are the blue and white rhombuses of Bavaria, while the right half is split into two silver and black triangles. Two diagonally-crossed silver swords with golden handles rest on top.
The town has had a coat of arms since 1338, that of the Counts of Abensberg. With the death of the last Count, Nicholas of Abensberg, in 1485, the estates fell to the Duchy of Bavaria-Munich, meaning that henceforth only the Bavarian coat of arms was ever used.
On 31 December 1809, a decree of King Maximilian of Bavaria granted the city a new coat of arms, as a recognition of their (mainly humanitarian and logistic) services in the Battle of Abensberg the same year. The diagonally divided field in silver and black came from the old crest of the Counts of Abensberg, while the white and blue diamonds came from that of the House of Wittelsbach, the rulers of Bavaria. The swords recall the Battle of Abensberg.
The district of Offenstetten previously possessed its own coat of arms.
Economy and Infrastructure.
The area around Abensberg, the so-called sand belt between Siegenburg, Neustadt an der Donau, Abensberg and Langquaid, is used for the intensive farming of asparagus, due to the optimal soil condition and climate. Some ninety-four growers produce asparagus on 212 hectares of land. Abensberg asparagus enjoys a reputation among connoisseurs as a particular delicacy. In addition to asparagus, the production of hops plays a major role locally, the region having its own label, and there are still three independent breweries in the area. The town of Abensberg marks the start of the 'Deutsche Hopfenstraße' ('German Hops Road'), a nickname given to the Bundesstraße 301, a German federal highway which runs through the heartland of Germany's hops-growing industry, ending in Freising.
Transport.
The Abensberg railway station is located on the Danube Valley Railway from Regensburg to Ingolstadt. The city can be reached via the A-93 Holledau-Regensburg road (exit Abensberg). Three Bundesstraße (German federal highways) cross south of Abensberg: B 16, B 299 and B 301.
Public facilities.
Schools.
Abensberg has a Grundschule (primary school), a Hauptschule (open admission secondary school), and the Johann-Turmair-Realschule (secondary modern school). There is also a College of Agriculture and Home Economics. Since 2007, the Kelheim Berufsschule has had a campus in Abensberg, and outside the state sector is the St. Francis Vocational Training Centre, run by a Catholic youth organisation.
Culture and sightseeing.
Theatre.
In 2008, a former goods shed by the main railway station of Abensberg was converted into a theatre by local volunteers. The 'Theater am Bahnhof' ('Theatre at the Railway Station') is mostly used by the 'Theatergruppe Lampenfieber' and was opened on 19 October 2008.
Museums.
Abensberg has a long tradition of museums. In the nineteenth century, Nicholas Stark and Peter Paul Dollinger began a collection based on local history. This collection and the collection of the 'Heimatverein' (local history society) were united in 1963 into the Aventinus Museum, in the cloister of the former Carmelite monastery. On 7 July 2006, the new Town Museum of Abensberg was opened in the former duke's castle in the town.
Kuchlbauer Brewery.
Two blocks west of the Old Town is the Kuchlbauer Brewery and beer garden featuring the Kuchlbauer Tower, a colorful and unconventional observation tower designed by Viennese architect Friedensreich Hundertwasser. The brewery and tower are open to the public.
Missing memorial.
Up until the 1950s, Abensberg and the surrounding villages contained a number of graves of victims of a Death March in the Spring of 1945 from the Hersbruck sub-camp of the Dachau concentration camp, who were either murdered by the SS or died of exhaustion. They were originally buried where they died, but were later moved on the orders of the US military government to the cemeteries of their previous homes. At the cemetery in what is now the district of Pullach stood a memorial stone which was mentioned as recently as 1967, but which is no longer at the site. The suffering of ten unknown victims of the camp was recorded on the stone.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1306'>
Arminianism
Arminianism is based on the theological ideas of the Dutch Reformed theologian Jacobus Arminius (1560–1609) and his historic supporters known as the Remonstrants. His teachings held to the five solae of the Reformation, but they were distinct in some ways from particular teachings of Martin Luther, Huldrych Zwingli, John Calvin, and other Protestant Reformers. Jacobus Arminius (Jacobus Hermanszoon) was a student of Beza (successor of Calvin) at the Theological University of Geneva. Arminianism is known as a soteriological diversification of Protestant Christianity.
Dutch Arminianism was originally articulated in the Remonstrance (1610), a theological statement signed by 45 ministers and submitted to the States General of the Netherlands. The Synod of Dort (1618–19) was called by the States General to consider the Five Articles of Remonstrance. These articles asserted that
Many Christian denominations have been influenced by Arminian views on the will of man being freed by grace prior to regeneration, notably the Baptists in the 17th century (see 'A History of the Baptists', third edition, by Robert G. Torbet), the Methodists in the 18th century, and the Seventh-day Adventist Church. Some assert that Universalists and Unitarians in the 18th and 19th centuries were theologically linked with Arminianism. Denominations such as the Anabaptists (beginning in 1525) and the Waldensians (pre-Reformation), and other groups prior to the Reformation, have also affirmed that each person may choose the contingent response of either resisting God's grace or yielding to it.
The original beliefs of Jacobus Arminius himself are commonly defined as Arminianism, but more broadly, the term may embrace the teachings of Hugo Grotius, John Wesley, and others as well. Classical Arminianism, to which Arminius is the main contributor, and Wesleyan Arminianism, to which John Wesley is the main contributor, are the two main schools of thought. Wesleyan Arminianism is often identified with Methodism. Some Arminian schools of thought share certain similarities with Semipelagianism, believing the first step of salvation is by human will, but classical Arminianism holds that the first step of salvation is the grace of God. Historically, the Council of Orange (529) condemned semi-Pelagian thought, and it is accepted by some as a document teaching a doctrine between Augustinian and semi-Pelagian thought, making it similar to Arminianism.
The two systems of Calvinism and Arminianism share a common history and many doctrines within Christian theology. However, because of their differences over the doctrines of divine predestination and election, many people view these schools of thought as opposed to each other. In short, the difference can be seen ultimately by whether God allows His desire to save all to be resisted by an individual's will (in the Arminian doctrine) or whether God's grace is irresistible and limited to only some (in Calvinism). Put another way, is God's sovereignty shown, in part, through His allowance of free decisions? Some Calvinists assert that the Arminian perspective presents a synergistic system of salvation and therefore is not only by grace, while Arminians firmly reject this conclusion. Many consider the theological differences to be crucial differences in doctrine, while others find them to be relatively minor.
History.
Jacobus Arminius was a Dutch pastor and theologian in the late 16th and early 17th centuries. He was taught by Theodore Beza, Calvin's hand-picked successor, but after examination of the Scriptures, he rejected his teacher's theology that it is God who unconditionally elects some for salvation. Instead Arminius proposed that the election of God was 'of believers', thereby making it conditional on faith. Arminius's views were challenged by the Dutch Calvinists, especially Franciscus Gomarus, but Arminius died before a national synod could occur.
Arminius's followers, not wanting to adopt their leader's name, called themselves the Remonstrants. When Arminius died before he could satisfy the States General of Holland's request for a 14-page paper outlining his views, the Remonstrants replied in his stead, crafting the Five Articles of Remonstrance. After some political maneuvering, the Dutch Calvinists were able to convince Prince Maurice of Nassau to deal with the situation. Maurice systematically removed Arminian magistrates from office and called a national synod at Dordrecht. This Synod of Dort was open primarily to Dutch Calvinists (Arminians were excluded), with Calvinist representatives from other countries, and in 1618 published a condemnation of Arminius and his followers as heretics. Part of this publication was the famous Five Points of Calvinism, in response to the Five Articles of Remonstrance.
Arminians across Holland were removed from office, imprisoned, banished, and sworn to silence. Twelve years later Holland officially granted Arminianism protection as a religion, although animosity between Arminians and Calvinists continued.
The debate between Calvin's followers and Arminius's followers is distinctive of post-Reformation church history. The emerging Baptist movement in 17th-century England, for example, was a microcosm of the historic debate between Calvinists and Arminians. The first Baptists, called 'General Baptists' because of their confession of a 'general' or unlimited atonement, were Arminians. The Baptist movement originated with Thomas Helwys, who left his mentor John Smyth (who had moved into shared belief and other distinctives of the Dutch Waterlander Mennonites of Amsterdam) and returned to London to start the first English Baptist Church in 1611. Later General Baptists such as John Griffith, Samuel Loveday, and Thomas Grantham defended a Reformed Arminian theology that reflected more the Arminianism of Arminius than that of the later Remonstrants or the English Arminianism of Arminian Puritans like John Goodwin or Anglican Arminians such as Jeremy Taylor and Henry Hammond. The General Baptists encapsulated their Arminian views in numerous confessions, the most influential of which was the Standard Confession of 1660. In the 1640s the Particular Baptists were formed, diverging strongly from Arminian doctrine and embracing the strong Calvinism of the Presbyterians and Independents. Their robust Calvinism was publicized in such confessions as the London Baptist Confession of 1644 and the Second London Confession of 1689. The London Confession of 1689 was later used by Calvinistic Baptists in America (called the Philadelphia Baptist Confession), whereas the Standard Confession of 1660 was used by the American heirs of the English General Baptists, who soon came to be known as Free Will Baptists.
This same dynamic between Arminianism and Calvinism can be seen in the heated discussions between friends and fellow Methodist ministers John Wesley and George Whitefield. Wesley was a champion of Arminian teachings, defending his soteriology in a periodical titled 'The Arminian' and writing articles such as 'Predestination Calmly Considered'. He defended Arminianism against charges of semi-Pelagianism, holding strongly to beliefs in original sin and total depravity. At the same time, Wesley attacked the determinism that he claimed characterized unconditional election and maintained a belief in the ability to lose salvation. Wesley also clarified the doctrine of prevenient grace and preached the ability of Christians to attain to perfection. While Wesley freely made use of the term 'Arminian,' he did not self-consciously root his soteriology in the theology of Arminius but was highly influenced by 17th-century English Arminianism and thinkers such as John Goodwin, Jeremy Taylor and Henry Hammond of the Anglican 'Holy Living' school, and the Remonstrant Hugo Grotius.
Current landscape.
Advocates of both Arminianism and Calvinism find a home in many Protestant denominations, and sometimes both exist within the same denomination. Faiths leaning at least in part in the Arminian direction include Methodists, Free Will Baptists, Christian Churches and Churches of Christ, General Baptists, the Seventh-day Adventist Church, Church of the Nazarene, The Salvation Army, Conservative Mennonites, Old Order Mennonites, Amish and Charismatics. Denominations leaning in the Calvinist direction are grouped as the Reformed churches and include Particular Baptists, Reformed Baptists, Presbyterians, and Congregationalists. The majority of Southern Baptists, including Billy Graham, accept Arminianism with an exception allowing for a doctrine of perseverance of the saints ('eternal security'). Many see Calvinism as growing in acceptance, and some prominent Reformed Baptists, such as Albert Mohler and Mark Dever, have been pushing for the Southern Baptist Convention to adopt a more Calvinistic orientation (it should be noted, however, that no Baptist church is bound by any resolution adopted by the Southern Baptist Convention). Lutherans espouse a view of salvation and election distinct from both the Calvinist and Arminian schools of soteriology.
The current scholarly support for Arminianism is wide and varied. One particular thrust is a return to the teachings of Arminius. F. Leroy Forlines, Robert Picirilli, Stephen Ashby and Matthew Pinson (see citations) are four of the more prominent supporters. Forlines has referred to this type of Arminianism as 'Classical Arminianism,' while Picirilli, Pinson, and Ashby have termed it 'Reformation Arminianism' or 'Reformed Arminianism.' Through Methodism, Wesley's teachings also inspire a large scholarly following, with vocal proponents including J. Kenneth Grider, Stanley Hauerwas, Thomas Oden, Thomas Jay Oord, and William Willimon.
Recent influence of the New Perspective on Paul movement has also reached Arminianism — primarily through a view of corporate election. The New Perspective scholars propose that the 1st-century Second Temple Judaism understood election primarily as national (Israelites) and racial (Jews), not as individual. Their conclusion is thus that Paul's writings on election should be interpreted in a similar corporate light.
Theology.
Arminian theology usually falls into one of two groups: Classical Arminianism, drawn from the teaching of Jacobus Arminius, and Wesleyan Arminianism, drawing primarily from the teaching of John Wesley. Both groups overlap substantially.
Classical Arminianism.
Classical Arminianism (sometimes titled Reformed Arminianism or Reformation Arminianism) is the theological system that was presented by Jacobus Arminius and maintained by some of the Remonstrants; its influence serves as the foundation for all Arminian systems. A list of beliefs is given below:
The Five Articles of Remonstrance that Arminius's followers formulated in 1610 state the above beliefs regarding (I) conditional election, (II) unlimited atonement, (III) total depravity, (IV) total depravity and resistible grace, and (V) possibility of apostasy. Note, however, that the fifth article did not completely deny perseverance of the saints; Arminius himself said that 'I never taught that a true believer can… fall away from the faith… yet I will not conceal, that there are passages of Scripture which seem to me to wear this aspect; and those answers to them which I have been permitted to see, are not of such a kind as to approve themselves on all points to my understanding.' Further, the text of the Articles of Remonstrance says that no believer can be plucked from Christ's hand, and that the matter of falling away, or 'loss of salvation', required further study before it could be taught with any certainty.
The core beliefs of Jacobus Arminius and the Remonstrants are summarized as such by theologian Stephen Ashby:
Wesleyan Arminianism.
John Wesley has historically been the most influential advocate for the teachings of Arminian soteriology. Wesley thoroughly agreed with the vast majority of what Arminius himself taught, maintaining strong doctrines of original sin, total depravity, conditional election, prevenient grace, unlimited atonement, and the possibility of apostasy.
Wesley departs from Classical Arminianism primarily on three issues:
Other variations.
Since the time of Arminius, his name has come to represent a very large variety of beliefs. Some of these beliefs, such as Pelagianism and semi-Pelagianism (see below) are not considered to be within Arminian orthodoxy and are dealt with elsewhere. Some doctrines, however, do adhere to the Arminian foundation and, while minority views, are highlighted below.
Open theism.
The doctrine of open theism states that God is omnipresent, omnipotent, and omniscient, but differs on the nature of the future. Open theists claim that the future is not completely determined (or 'settled') because people have not made their free decisions yet. God therefore knows the future partially in possibilities (human free actions) rather than solely certainties (divinely determined events). As such, open theists resolve the issue of human free will and God's sovereignty by claiming that God is sovereign because he does not ordain each human choice, but rather works in cooperation with his creation to bring about his will. This notion of sovereignty and freedom is foundational to their understanding of love since open theists believe that love is not genuine unless it is freely chosen. The power of choice under this definition has the potential for as much harm as it does good, and open theists see free will as the best answer to the problem of evil. Well-known proponents of this theology are Greg Boyd, Clark Pinnock, Thomas Jay Oord, William Hasker, and John E. Sanders.
Some Arminians, such as professor and theologian Robert Picirilli, reject the doctrine of open theism as a 'deformed Arminianism'. Joseph Dongell stated that 'open theism actually moves beyond classical Arminianism towards process theology.' There are also some Arminians, like Roger Olson, who believe open theism to be an alternative view that a Christian can hold. The majority Arminian view accepts classical theism – the belief that God's power, knowledge, and presence have no external limitations, that is, outside of his divine nature. Most Arminians reconcile human free will with God's sovereignty and foreknowledge by holding three points:
Corporate view of election.
The majority Arminian view is that election is individual and based on God's foreknowledge of faith, but a second perspective deserves mention. These Arminians reject the concept of individual election entirely, preferring to understand the doctrine in corporate terms. According to this corporate election, God never chose individuals to elect to salvation, but rather He chose to elect the believing church to salvation. Dutch Reformed theologian Herman Ridderbos says '[The certainty of salvation] does not rest on the fact that the church belongs to a certain 'number', but that it belongs to Christ, from before the foundation of the world. Fixity does not lie in a hidden decree, therefore, but in corporate unity of the Church with Christ, whom it has come to know in the gospel and has learned to embrace in faith.'
Corporate election draws support from a similar concept of corporate election found in the Old Testament and Jewish law. Indeed most biblical scholarship is in agreement that Judeo-Greco-Roman thought in the 1st century was opposite of the Western world's 'individual first' mantra – it was very collectivist or communitarian in nature. Identity stemmed from membership in a group more than individuality. According to Romans 9–11, supporters claim, Jewish election as the chosen people ceased with their national rejection of Jesus as Messiah. As a result of the new covenant, God's chosen people are now the corporate body of Christ, the church (sometimes called 'spiritual Israel' – see also Covenant theology). Pastor and theologian Dr. Brian Abasciano claims 'What Paul says about Jews, Gentiles, and Christians, whether of their place in God’s plan, or their election, or their salvation, or how they should think or behave, he says from a corporate perspective which views the group as primary and those he speaks about as embedded in the group. These individuals act as members of the group to which they belong, and what happens to them happens by virtue of their membership in the group.'
These scholars also maintain that Jesus was the only human ever elected and that individuals must be 'in Christ' (Eph 1:3–4) through faith to be part of the elect. This was, in fact, Swiss Reformed theologian Karl Barth's understanding of the doctrine of election. Joseph Dongell, professor at Asbury Theological Seminary, states 'the most conspicuous feature of Ephesians 1:3–2:10 is the phrase 'in Christ', which occurs twelve times in Ephesians 1:3–14 alone… this means that Jesus Christ himself is the chosen one, the predestined one. Whenever one is incorporated into him by grace through faith, one comes to share in Jesus' special status as chosen of God.' Markus Barth illustrates the inter-connectedness: 'Election in Christ must be understood as the election of God's people. Only as members of that community do individuals share in the benefits of God's gracious choice.'
Arminianism and other views.
Understanding Arminianism is aided by understanding the theological alternatives: Pelagianism, Semi-Pelagianism, Lutheranism, and Calvinism. Arminianism, like any major belief system, is frequently misunderstood both by critics and would-be supporters.
Comparison among Protestants.
Arminian beliefs compared to other Protestants.
Common misconceptions.
Many Calvinist critics of Arminianism, both historically and currently, claim that Arminianism condones, accepts, or even explicitly supports Pelagianism or Semi-Pelagianism. Arminius referred to Pelagianism as 'the grand falsehood' and stated that he 'must confess that I detest, from my heart, the consequences [of that theology].' David Pawson, a British pastor, decries this association as 'libelous' when attributed to Arminius' or Wesley's doctrine. Indeed most Arminians reject all accusations of Pelagianism; nonetheless, primarily due to Calvinist opponents, the two terms remain intertwined in popular usage.
Comparison with Calvinism.
Ever since Arminius and his followers revolted against Calvinism in the early 17th century, Protestant soteriology has been largely divided between Calvinism and Arminianism. The extreme of Calvinism is hyper-Calvinism, which insists that signs of election must be sought before evangelization of the unregenerate takes place and that the eternally damned have no obligation to repent and believe, and on the extreme of Arminianism is Pelagianism, which rejects the doctrine of original sin on grounds of moral accountability; but the overwhelming majority of Protestant, evangelical pastors and theologians hold to one of these two systems or somewhere in between.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1307'>
The Alan Parsons Project
The Alan Parsons Project was a British progressive rock band, active between 1975 and 1990, consisting of Eric Woolfson and Alan Parsons surrounded by a varying number of session musicians and some relatively consistent band members such as guitarist Ian Bairnson, bassist and vocalist David Paton, and vocalist Lenny Zakatek.
Behind the revolving line-up and the regular sidemen, the true core of the Project was the duo of Parsons and Woolfson. Woolfson was a songwriter by profession, but also a composer and pianist. Parsons was a successful producer and accomplished engineer. Almost all songs on the band's albums are credited to 'Woolfson/Parsons'.
History.
Alan Parsons met Eric Woolfson in the canteen of Abbey Road Studios in the summer of 1974. Parsons had already acted as assistant engineer on the Beatles' 'Abbey Road' and 'Let It Be', had recently engineered Pink Floyd's 'The Dark Side of the Moon', and had produced several acts for EMI Records. Woolfson, a songwriter and composer, was working as a session pianist; he had also composed material for a concept album idea based on the work of Edgar Allan Poe.
When Parsons asked Woolfson to become his manager, he accepted and subsequently managed Parsons' career as a producer and engineer through a string of successes, including Pilot, Steve Harley & Cockney Rebel, John Miles, Al Stewart, Ambrosia and The Hollies. Parsons commented at the time that he felt frustrated in having to accommodate the views of some of the musicians, which he felt interfered with his production. Woolfson came up with the idea of making an album based on developments in the film industry, where directors such as Alfred Hitchcock and Stanley Kubrick were the focal point of the film's promotion, rather than individual film stars. If the film industry was becoming a director's medium, Woolfson felt the music business might well become a producer's medium.
Recalling his earlier Edgar Allan Poe material, Woolfson saw a way to combine his and Parsons' respective talents. Parsons would produce and engineer songs written by the two, and the Alan Parsons Project was born. Their first album, 'Tales of Mystery and Imagination', including major contributions by all members of Pilot and Ambrosia, was a success, reaching the Top 40 in the US 'Billboard' 200 chart. The song 'The Raven' featured lead vocals by the actor Leonard Whiting, and, according to the 2007 remastered album liner notes, was the first rock song to use a digital vocoder, with Alan Parsons speaking lyrics through it.
Arista Records then signed The Alan Parsons Project for further albums. Through the late 1970s and early 1980s, the group's popularity continued to grow (although they were always more popular in North America and Continental Europe than in their home country, never achieving a UK Top 40 single or Top 20 album). The singles 'I Wouldn't Want to Be Like You', 'Games People Play', 'Damned If I Do', 'Time' (Woolfson's first lead vocal), 'Eye in the Sky' and 'Don't Answer Me' had a notable impact on the 'Billboard' Hot 100. After those successes, however, the group began to fade from view. There were fewer hit singles, and declining album sales. 1987's 'Gaudi' was the Project's last release, though they planned to record an album called 'Freudiana' next.
Although the studio version of 'Freudiana' was produced by Parsons (and featured the regular Project backing musicians, making it an 'unofficial' Project album), it was primarily Woolfson's idea to turn it into a musical. This eventually led to a rift between the two artists. While Parsons pursued his own solo career and took many members of the Project on the road for the first time in a successful worldwide tour, Woolfson went on to produce musical plays influenced by the Project's music. 'Freudiana', 'Gaudi' and 'Gambler' were three musicals that included some Project songs like 'Eye in the Sky', 'Time', 'Inside Looking Out', and 'Limelight'. The live music from 'Gambler' was only distributed at the performance site in Mönchengladbach, Germany.
In 1981, Parsons, Woolfson, and their record label, Arista, were stalled in contract renegotiations when, on March 5, the two submitted an all-instrumental album tentatively titled 'The Sicilian Defence' (the name of an aggressive opening in chess), arguably to get out of their recording contract. Arista's refusal to release the album had two known effects: the negotiations led to a renewed contract, and the album was not released at that time.
In interviews given before his death in 2009, Woolfson said he planned to release one track from the 'Sicilian' album, which in 2008 appeared as a bonus track on a CD reissue of the 'Eve' album. Parsons later changed his mind about the album and announced that it would finally be released, for the first time, as a bonus disc in the 2014 Project box set 'The Complete Albums Collection'.
Parsons released titles under his own name ('Try Anything Once', 'On Air', 'The Time Machine', and 'A Valid Path'), while Woolfson made concept albums named 'Freudiana' (about Sigmund Freud's work on psychology) and 'Poe: More Tales of Mystery and Imagination' (continuing from the Alan Parsons Project's first album about Edgar Allan Poe's literature).
'Tales of Mystery and Imagination' was first remixed in 1987 for release on CD, and included narration by Orson Welles which had been recorded in 1975, but arrived too late to be included on the original album. On the 2007 deluxe edition release, it is revealed that parts of this tape were used for the 1976 Griffith Park Planetarium launch of the original album, the 1987 remix, and various radio spots, all of which were included as bonus material.
Sound.
Most of the Project's titles, especially the early work, share common traits (likely influenced by Pink Floyd's 'The Dark Side of the Moon', on which Parsons was the audio engineer in 1973). They were concept albums, and typically began with an instrumental introduction which faded into the first song, often had an instrumental piece in the middle of the second LP side, and concluded with a quiet, melancholic, or powerful song. The opening instrumental was largely done away with by 1980; no later Project album except 'Eye in the Sky' featured one (although every album includes at least one instrumental somewhere in the running order). The instrumental on that album, 'Sirius', eventually became the best-known (or at least most frequently heard) Parsons instrumental. It was used as entrance music by various American sports teams, most notably by the Chicago Bulls during their 1990s NBA dynasty. It was also used as the entrance theme for Ricky Steamboat in pro wrestling of the mid-1980s. In addition, Sirius has been played in a variety of TV shows and movies including the episode 'Vanishing Act' of ', and the 2009 film 'Cloudy with a Chance of Meatballs'.
The group was notable for using several vocal performers instead of having a single lead vocalist. Lead vocal duties were shared by guest vocalists chosen by their vocal style to complement each song. In later years, Woolfson sang lead on many of the group's hits (including 'Time', 'Eye in the Sky' and 'Don't Answer Me'); however, he did not sing any lead vocals on the band's first four albums, and appeared as vocalist only sporadically thereafter. When the Woolfson-sung songs became significant hits, the record company pressured Parsons to use him more, but Parsons preferred 'real' singers, which Woolfson admitted he was not. In addition to Woolfson, Chris Rainbow, Lenny Zakatek, John Miles, David Paton and The Zombies' Colin Blunstone made regular appearances. Other singers, such as Arthur Brown, Procol Harum's Gary Brooker, Dave Terry aka Elmer Gantry, Vitamin Z's Geoff Barradale and Marmalade's Dean Ford, have recorded only once or twice with the Project. Parsons himself only sang lead on one song ('The Raven') through a vocoder, and can be heard singing backing vocals on a few others, including 'To One in Paradise'. Both of those songs appeared on 'Tales of Mystery and Imagination'.
Although the vocalists varied, a small number of musicians worked with the Alan Parsons Project regularly. These core musicians contributed to the recognisable style of a Project song in spite of the varied singer line-up. Together with Parsons and Woolfson, the Project originally consisted of the group Pilot, with Ian Bairnson (guitar), David Paton (bass) and Stuart Tosh (drums). Pilot's keyboardist Billy Lyall also contributed. From 'Pyramid' onwards, Tosh was replaced by Stuart Elliott of Cockney Rebel. Bairnson played on all albums, and Paton stayed almost until the end. Andrew Powell arranged the orchestra (and often choirs) on all albums except 'Vulture Culture', when he was composing the score of Richard Donner's film 'Ladyhawke'. That score was partly in the Project style, recorded by most of the Project regulars, and produced and engineered by Parsons. Powell also composed some material for the first two Project albums. From 'Vulture Culture' onwards, Richard Cottle played as a regular member on synthesizers and saxophone.
Except for one occasion, the Project never played live during its original incarnation. This was because Woolfson and Parsons saw themselves mainly in the roles of writing and production, and also because of the technical difficulties of reproducing on stage the complex instrumentation used in the studio. In the 1990s things changed with the technology of digital samplers. The one occasion where the band was introduced as 'The Alan Parsons Project' in a live performance was at Night of the Proms 1990 (at the time of the group's break-up), featuring all Project regulars except Woolfson who was present but behind the scenes, while Parsons stayed at the mixer except during the last song, where he played acoustic guitar.
Since 1993, a new version of the band has toured with various line-ups, with Parsons performing live on acoustic guitar, keyboards and vocals. This incarnation was billed first as Alan Parsons and was eventually renamed the Alan Parsons Live Project, a name kept distinct from 'The Alan Parsons Project' because of Parsons' break-up with founding partner Woolfson.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1309'>
Almost all
In mathematics, the phrase 'almost all' has a number of specialised uses.
'Almost all' is sometimes used synonymously with 'all but [except] finitely many' (formally, a cofinite set) or 'all but a countable set' (formally, a cocountable set); see almost.
A simple example is that almost all prime numbers are odd, which is based on the fact that all but one prime number are odd. (The exception is the number 2, which is prime but not odd.)
When speaking about the reals, sometimes it means 'all reals but a set of Lebesgue measure zero' (formally, almost everywhere). In this sense almost all reals are not a member of the Cantor set even though the Cantor set is uncountable.
In number theory, if 'P'('n') is a property of positive integers, and if 'p'('N') denotes the number of positive integers 'n' less than 'N' for which 'P'('n') holds, and if 'p'('N')/'N' tends to 1 as 'N' tends to infinity (see limit), then we say that 'P'('n') holds for almost all positive integers 'n' (formally, asymptotically almost surely).
For example, the prime number theorem states that the number of prime numbers less than or equal to 'N' is asymptotically equal to 'N'/ln 'N'. Therefore the proportion of prime integers below 'N' is roughly 1/ln 'N', which tends to 0. Thus 'almost all' positive integers are composite (not prime), although there are still infinitely many primes.
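The thinning-out of the primes described above can be checked numerically. The sketch below (not part of the article; the function name is illustrative) uses a simple Sieve of Eratosthenes to show the fraction of primes below 10^k shrinking as k grows:

```python
def prime_count(n):
    """Count the primes <= n with a simple Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # cross off every multiple of i starting at i*i
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

# The fraction of primes below 10**k keeps shrinking as k grows,
# matching the ~1/ln N density given by the prime number theorem.
densities = [prime_count(10 ** k) / 10 ** k for k in range(2, 6)]
```

For instance, a quarter of the integers up to 100 are prime, but fewer than a tenth of those up to 100,000 are; the density never reaches 0, yet tends to it, which is exactly the asymptotic sense of 'almost all'.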
Occasionally, 'almost all' is used in the sense of 'almost everywhere' in measure theory, or in the closely related sense of 'almost surely' in probability theory.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1313'>
Aromatic hydrocarbon
An aromatic hydrocarbon or arene (or sometimes aryl hydrocarbon) is a hydrocarbon with alternating double and single bonds between carbon atoms forming rings. The term 'aromatic' was assigned before the physical mechanism determining aromaticity was discovered; it was coined simply because many of the compounds have a sweet or pleasant odor. The configuration of six carbon atoms in aromatic compounds is known as a benzene ring, after the simplest possible such hydrocarbon, benzene. Aromatic hydrocarbons can be 'monocyclic' (MAH) or 'polycyclic' (PAH).
Some non-benzene-based compounds called heteroarenes, which follow Hückel's rule (for monocyclic rings: the number of π-electrons equals 4n+2), are also aromatic compounds. In these compounds, at least one carbon atom is replaced by one of the heteroatoms oxygen, nitrogen, or sulfur. Examples of non-benzene compounds with aromatic properties are furan, a heterocyclic compound with a five-membered ring that includes an oxygen atom, and pyridine, a heterocyclic compound with a six-membered ring containing one nitrogen atom.
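The 4n+2 count in Hückel's rule is a simple arithmetic test, sketched below (an illustration, not from the article; the function name is hypothetical):

```python
def satisfies_huckel(pi_electrons):
    # Hückel's rule: a planar monocyclic pi system is aromatic when it
    # carries 4n + 2 pi electrons for some non-negative integer n,
    # i.e. 2, 6, 10, 14, ...
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# benzene, pyridine and furan each have a 6 pi-electron system (n = 1),
# so they pass; cyclobutadiene's 4 pi electrons fail the rule.
```

The test explains why both benzene (all carbon) and the heteroarenes above count as aromatic: only the π-electron count of the ring matters, not which atoms supply the electrons.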
Benzene ring model.
Benzene, C6H6, is the simplest aromatic hydrocarbon, and it was the first one recognized. The nature of its bonding was first recognized by August Kekulé in the 19th century.
Each carbon atom in the hexagonal cycle has four electrons to share. One goes to the hydrogen atom, and one each to the two neighboring carbons. This leaves one to share with one of its two neighboring carbon atoms, which is why the benzene molecule is drawn with alternating single and double bonds around the hexagon.
The structure is also illustrated as a circle around the inside of the ring to show six electrons floating around in delocalized molecular orbitals the size of the ring itself. This also represents the equivalent nature of the six carbon-carbon bonds all of bond order ~1.5. This equivalency is well explained by resonance forms. The electrons are visualized as floating above and below the ring with the electromagnetic fields they generate acting to keep the ring flat.
General properties:
The circle symbol for aromaticity was introduced by Sir Robert Robinson and his student James Armit in 1925 and popularized starting in 1959 by the Morrison & Boyd textbook on organic chemistry. The proper use of the symbol is debated: some publications use it to describe any cyclic pi system, while others reserve it for pi systems that obey Hückel's rule. Jensen argues that, in line with Robinson's original proposal, the use of the circle symbol should be limited to monocyclic 6 pi-electron systems. In this way the circle symbol for a 6c–6e bond can be compared to the Y symbol for a 3c–2e bond.
Arene synthesis.
A reaction that forms an arene compound from an unsaturated or partially unsaturated cyclic precursor is simply called an aromatization. Many laboratory methods exist for the organic synthesis of arenes from non-arene precursors. Many methods rely on cycloaddition reactions: alkyne trimerization describes the [2+2+2] cyclization of three alkynes, while in the Dötz reaction an alkyne, carbon monoxide and a chromium carbene complex are the reactants. Diels–Alder reactions of alkynes with pyrone or cyclopentadienone, with expulsion of carbon dioxide or carbon monoxide, also form arene compounds. In the Bergman cyclization the reactants are an enyne plus a hydrogen donor.
Another set of methods is the aromatization of cyclohexanes and other aliphatic rings: reagents are catalysts used in hydrogenation such as platinum, palladium and nickel (reverse hydrogenation), quinones and the elements sulfur and selenium.
Arene reactions.
Arenes are reactants in many organic reactions.
Aromatic substitution.
In aromatic substitution one substituent on the arene ring, usually hydrogen, is replaced by another substituent. The two main types are electrophilic aromatic substitution when the active reagent is an electrophile and nucleophilic aromatic substitution when the reagent is a nucleophile. In radical-nucleophilic aromatic substitution the active reagent is a radical. An example of electrophilic aromatic substitution is the nitration of salicylic acid:
Coupling reactions.
In coupling reactions a metal catalyses a coupling between two formal radical fragments. Common coupling reactions with arenes result in the formation of new carbon–carbon bonds (e.g., alkylarenes, vinyl arenes, biaryls), new carbon–nitrogen bonds (anilines) or new carbon–oxygen bonds (aryloxy compounds). An example is the direct arylation of perfluorobenzenes.
Hydrogenation.
Hydrogenation of arenes creates saturated rings. The compound 1-naphthol is completely reduced to a mixture of decalin-ol isomers.
The compound resorcinol, hydrogenated with Raney nickel in the presence of aqueous sodium hydroxide, forms an enolate which is alkylated with methyl iodide to give 2-methyl-1,3-cyclohexanedione:
Cycloadditions.
Cycloaddition reactions are not common. Unusual thermal Diels–Alder reactivity of arenes can be found in the Wagner–Jauregg reaction. Other photochemical cycloaddition reactions with alkenes occur through excimers.
Benzene and derivatives of benzene.
Benzene derivatives have from one to six substituents attached to the central benzene core. Examples of benzene compounds with just one substituent are phenol, which carries a hydroxyl group, and toluene with a methyl group. When there is more than one substituent present on the ring, their spatial relationship becomes important for which the arene substitution patterns 'ortho', 'meta', and 'para' are devised. For example, three isomers exist for cresol because the methyl group and the hydroxyl group can be placed next to each other (ortho), one position removed from each other (meta), or two positions removed from each other (para). Xylenol has two methyl groups in addition to the hydroxyl group, and, for this structure, 6 isomers exist.
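The isomer counts above (three cresols, six xylenols) follow from counting substitution patterns up to the rotations and reflections of the hexagonal ring. A brute-force sketch (an illustration only; the helper names are hypothetical):

```python
from itertools import combinations, permutations

def canonical(ring):
    """Canonical form of a 6-position pattern under the rotations and
    reflections of the hexagon (the dihedral symmetry of the ring)."""
    variants = []
    for r in range(6):
        rot = ring[r:] + ring[:r]
        variants.append(rot)         # rotation
        variants.append(rot[::-1])   # rotation followed by reflection
    return min(variants)

def count_isomers(substituents):
    """Count distinct benzene substitution isomers for the given list of
    non-hydrogen substituents; every other ring position carries H."""
    seen = set()
    for positions in combinations(range(6), len(substituents)):
        for order in set(permutations(substituents)):
            ring = ['H'] * 6
            for pos, sub in zip(positions, order):
                ring[pos] = sub
            seen.add(canonical(tuple(ring)))
    return len(seen)
```

With this, count_isomers(['OH', 'Me']) recovers the three cresols (ortho, meta, para), and count_isomers(['OH', 'Me', 'Me']) recovers the six xylenols.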
The arene ring has an ability to stabilize charges. This is seen in, for example, phenol (C6H5-OH), which is acidic at the hydroxyl (OH), since a charge on this oxygen (alkoxide -O–) is partially delocalized into the benzene ring.
Polycyclic aromatic hydrocarbons.
Polycyclic aromatic hydrocarbons (PAHs) are aromatic hydrocarbons that consist of fused aromatic rings and do not contain heteroatoms or carry substituents. Naphthalene is the simplest example of a PAH. PAHs occur in oil, coal, and tar deposits, and are produced as byproducts of fuel burning (whether fossil fuel or biomass). As pollutants, they are of concern because some compounds have been identified as carcinogenic, mutagenic, and teratogenic. PAHs are also found in cooked foods. Studies have shown that high levels of PAHs are found, for example, in meat cooked at high temperatures such as grilling or barbecuing, and in smoked fish.
They are also found in the interstellar medium, in comets, and in meteorites and are a candidate molecule to act as a basis for the earliest forms of life. In graphene the PAH motif is extended to large 2D sheets.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1315'>
Abbey
An abbey (from Latin 'abbatia', derived from 'abbās', in turn from the Aramaic 'abba', 'father') has historically been a Catholic or, more recently, Anglican monastery or convent under the authority of an abbot or an abbess, who serves as the spiritual father or mother of the community.
The term can also refer to an establishment which has long ceased to function as an abbey, in some cases for centuries (for example, see Westminster Abbey below). Many orders had their own styles of abbeys. Among these were the primary orders such as the Benedictines, Cistercians and Carthusians; there were also minor orders such as the Dominicans, Franciscans, and Carmelites.
Origins.
The formation of communities dates from pre-Christian times, as witness the Essenes; but the earliest Christian monastic foundations of which there is definite knowledge were simply groups of huts without any orderly arrangement, erected about the abode of some solitary famous for holiness and asceticism, around whom had gathered a knot of disciples anxious to learn his doctrine and to imitate his way of life.
In the earliest age of Christian monasticism the ascetics were accustomed to live singly, independent of one another, not far from some village church, supporting themselves by the labour of their own hands, and distributing the surplus after the supply of their own scanty wants to the poor. Increasing religious fervour, aided by persecution, drove them farther and farther away from the civilization into mountain solitudes or lonely deserts. The deserts of Egypt swarmed with the 'cells' or huts of these anchorites. Anthony the Great, who had retired to the Egyptian Thebaid during the persecution of Maximian, AD 312, was the most celebrated among them for his austerities, his sanctity, and his power as an exorcist. His fame collected round him a host of followers imitating his asceticism in an attempt to imitate his sanctity. The deeper he withdrew into the wilderness, the more numerous his disciples became. They refused to be separated from him, and built their cells round that of their spiritual father. Thus arose the first monastic community, consisting of anchorites living each in his own little dwelling, united together under one superior. Anthony, as Johann August Wilhelm Neander remarks, 'without any conscious design of his own, had become the founder of a new mode of living in common, Coenobitism.'
Pachomius.
At Tabennae on the Nile, in Upper Egypt, however, St. Pachomius laid the foundations of the coenobitical life, arranging everything in an organized manner. He built several monasteries, each containing about 1,600 separate cells laid out in lines as an encampment, where the monks slept and performed some of their manual tasks; but there were large halls for their common needs, as the church, refectory, kitchen, even an infirmary and a guest-house. An enclosure protecting all these buildings gave the settlement the appearance of a walled village. It was this arrangement of monasteries, inaugurated by St. Pachomius, which finally spread throughout Palestine, and received the name of laurae, that is 'lanes' or 'alleys.' In addition to these congregations of solitaries, all living in huts apart, there were caenobia, monasteries wherein the inmates lived a common life, none of them being permitted to retire to the cells of a laurae before they had therein undergone a lengthy period of training. In time this form of common life superseded that of the older laurae.
Palladius, who visited the Egyptian monasteries about the close of the 4th century, found among the 300 members of the coenobium of Panopolis, under the Pachomian rule, 15 tailors, 7 smiths, 4 carpenters, 12 camel drivers and 15 tanners. Each separate community had its own oeconomus or steward, who was subject to a chief steward stationed at the head establishment. All the produce of the monks' labour was committed to him, and by him shipped to Alexandria. The money raised by the sale was expended in the purchase of stores for the support of the communities, and what was over was devoted to charity. Twice in the year the superiors of the several coenobia met at the chief monastery, under the presidency of an archimandrite ('the chief of the fold', from 'miandra', a sheepfold), and at the last meeting gave reports of their administration for the year. Details concerning the coenobia in the vicinity of Antioch are found in the writings of Chrysostom. The monks lived in separate huts, 'kalbbia', forming a religious hamlet on the mountain side. They were subject to an abbot, and observed a common rule.
Great Lavra, Mount Athos.
The necessity for defence from attacks (for monastic houses tended to accumulate rich gifts), economy of space and convenience of access from one part of the community to another, by degrees dictated a more compact and orderly arrangement of the buildings of a monastic coenobium. Large piles of building were erected, with strong outside walls, capable of resisting the assaults of an enemy, within which all the necessary edifices were ranged round one or more open courts, usually surrounded with cloisters. The usual Eastern arrangement is exemplified in the plan of the convent of the Great Lavra, Mount Athos.
This monastery, like the oriental monasteries generally, is surrounded by a strong and lofty blank stone wall, enclosing an area of between 3 and 4 acres (12,000 and 16,000 m²). The longer side extends to a length of about . There is only one main entrance, on the north side (A), defended by three separate iron doors. Near the entrance is a large tower (M), a constant feature in the monasteries of the Levant. There is a small postern gate at L. The enceinte comprises two large open courts, surrounded with buildings connected with cloister galleries of wood or stone. The outer court, which is much the larger, contains the granaries and storehouses (K), and the kitchen (H) and other offices connected with the refectory (G). Immediately adjacent to the gateway is a two-storied guest-house, opening from a cloister (C). The inner court is surrounded by a cloister (EE), from which open the monks' cells (II). In the centre of this court stands the katholikon or conventual church, a square building with an apse of the cruciform domical Byzantine type, approached by a domed narthex. In front of the church stands a marble fountain (F), covered by a dome supported on columns. Opening from the western side of the cloister, but actually standing in the outer court, is the refectory (G), a large cruciform building, about each way, decorated within with frescoes of saints. At the upper end is a semicircular recess, recalling the triclinium of the Lateran Palace at Rome, in which is placed the seat of the hegumenos or abbot. This apartment is chiefly used as a hall of meeting, the oriental monks usually taking their meals in their separate cells.
Coptic monastery.
The plan of a Coptic monastery, from Lenoir, shows a church of three aisles, with cellular apses, and two ranges of cells on either side of an oblong gallery.
Benedictine monasteries.
Monasticism in the West owes its extension and development to Benedict of Nursia (born AD 480). His rule was diffused rapidly from the parent foundation on Monte Cassino, the first abbey (529), through the whole of western Europe, and every country witnessed the erection of monasteries far exceeding anything that had yet been seen in spaciousness and splendour. Few great towns in Italy were without their Benedictine convent, and they quickly rose in all the great centres of population in England, France and Spain. Many monasteries were founded between AD 520 and 700. Before the Council of Constance, AD 1415, no fewer than 15,070 abbeys had been established of this order alone. No special plan was adopted or followed in the building of the first caenobia. The monks simply copied the buildings familiar to them, the Roman house or villa, whose plan, throughout the extent of the Roman Empire, was practically uniform. The founders of monasteries had often merely to install a community in an already existing villa. When they had to build, the natural instinct was to copy old models. If they fixed upon a site with existing buildings in good repair, they simply adapted them to their requirements, as St. Benedict did at Monte Cassino. The spread of the monastic life gradually effected great changes in the model of the Roman villa. The various avocations followed by the monks required suitable buildings, which were at first erected not upon any premeditated plan, but just as the need for them arose. These requirements, however, being practically the same in every country, resulted in practically similar arrangements everywhere. The buildings of a Benedictine abbey were uniformly arranged after one plan, modified where necessary to accommodate the arrangement to local circumstances.
The plan of the great Abbey of Saint Gall, erected about AD 719, indicates the general arrangement of a monastery of the first class towards the early part of the 9th century. According to architect Robert Willis, the general appearance of the convent is that of a town of isolated houses with streets running between them. It was planned in compliance with the Benedictine rule, which enjoined that, if possible, the monastery should contain every necessity of life. It should comprise a mill, a bakehouse, stables, and cow-houses, so that the monks had no need to go outside.
The general distribution of the buildings may be thus described: the church, with its cloister to the south, occupies the centre of a quadrangular area, about square. The buildings, as in all great monasteries, are distributed into groups. The church forms the nucleus, as the centre of the religious life of the community. In closest connection with the church is the group of buildings appropriated to the monastic life and its daily requirements: the refectory for eating, the dormitory for sleeping, the common room for social intercourse, the chapter-house for religious and disciplinary conference. These essential elements of monastic life are ranged about a cloister court, surrounded by a covered arcade, affording communication sheltered from the elements between the various buildings. The infirmary for sick monks, with the physician's house and physic garden, lies to the east. In the same group with the infirmary is the school for the novices. The outer school, with its headmaster's house against the opposite wall of the church, stands outside the convent enclosure, in close proximity to the abbot's house, that he might have a constant eye over them. The buildings devoted to hospitality are divided into three groups: one for the reception of distinguished guests, another for monks visiting the monastery, a third for poor travellers and pilgrims. The first and third are placed to the right and left of the common entrance of the monastery, the hospitium for distinguished guests being placed on the north side of the church, not far from the abbot's house; that for the poor on the south side next to the farm buildings. The monks are lodged in a guest-house built against the north wall of the church. The group of buildings connected with the material wants of the establishment is placed to the south and west of the church, and is distinctly separated from the monastic buildings.
The kitchen, buttery and offices are reached by a passage from the west end of the refectory, and are connected with the bakehouse and brewhouse, which are placed still farther away. The whole of the southern and western sides is devoted to workshops, stables and farm-buildings. The buildings, with some exceptions, seem to have been of one story only, and all but the church were probably erected of wood. The whole includes thirty-three separate blocks. The church (D) is cruciform, with a nave of nine bays, and a semicircular apse at either extremity. That to the west is surrounded by a semicircular colonnade, leaving an open 'paradise' (E) between it and the wall of the church. The whole area is divided by screens into various chapels. The high altar (A) stands immediately to the east of the transept, or ritual choir; the altar of Saint Paul (B) in the eastern, and that of St Peter (C) in the western apse. A cylindrical campanile stands detached from the church on either side of the western apse (FF).
The 'cloister court' (G) on the south side of the nave of the church has on its east side the 'pisalis' or 'calefactory' (H), the common sitting-room of the brethren, warmed by flues beneath the floor. On this side in later monasteries we invariably find the chapter house. It appears, however, from the inscriptions on the plan itself, that the north walk of the cloisters served for the purposes of a chapter-house, and was fitted up with benches on the long sides. Above the calefactory is the 'dormitory', opening into the south transept of the church to enable the monks to attend the nocturnal services with readiness, via the day-stair, which leads first to the cloister, or a night-stair, which leads directly to the church. A passage at the other end leads to the 'necessarium' (I). The southern side is occupied by the 'refectory' (K), from the west end of which the kitchen (L) is reached by a vestibule. This is separated from the main buildings of the monastery, and is connected by a long passage with a building containing the bakehouse and brewhouse (M), and the sleeping-rooms of the servants. The upper story of the refectory is the 'vestiarium', where the ordinary clothes of the brethren were kept. On the western side of the cloister is another two-story building (N). The cellar is below, and the larder and store-room above. Between this building and the church, opening by one door into the cloisters, and by another to the outer part of the monastery area, is the 'parlour' for interviews with visitors from the external world (O). On the eastern side of the north transept is the 'scriptorium' or writing-room (P1), with the library above.
To the east of the church stands a group of buildings comprising two miniature conventual establishments, each complete in itself. Each has a covered cloister surrounded by the usual buildings, i.e. refectory, dormitory, etc., and a church or chapel on one side, placed back to back. A detached building belonging to each contains a bath and a kitchen. One of these diminutive convents is appropriated to the 'oblati' or novices (Q), the other to the sick monks as an 'infirmary' (R).
The 'residence of the physicians' (S) stands contiguous to the infirmary, and the physic garden (T) at the north-east corner of the monastery. Besides other rooms, it contains a drug store, and a chamber for those who are dangerously ill. The 'house for bloodletting and purging' adjoins it on the west (U).
The 'outer school,' to the north of the convent area, contains a large schoolroom divided across the middle by a screen or partition, and surrounded by fourteen little rooms, termed the dwellings of the scholars. The head-master's house (W) is opposite, built against the side wall of the church. The two 'hospitia' or guest-houses for the entertainment of strangers of different degrees (X1 X2) comprise a large common chamber or refectory in the centre, surrounded by sleeping-apartments. Each is provided with its own brewhouse and bakehouse, and that for travelers of a superior order has a kitchen and storeroom, with bedrooms for their servants and stables for their horses. There is also an 'hospitium' for strange monks, abutting on the north wall of the church (Y).
Beyond the cloister, at the extreme verge of the convent area to the south, stands the 'factory' (Z), containing workshops for shoemakers, saddlers (or shoemakers, sellarii), cutlers and grinders, trencher-makers, tanners, curriers, fullers, smiths and goldsmiths, with their dwellings in the rear. On this side we also find the farm buildings, the large granary and threshing-floor (a), mills (c), malthouse (d). Facing the west are the stables (e), ox-sheds (f), goat-stables (g), piggeries (h), sheep-folds
(i), together with the servants' and labourers' quarters (k). At the south-east corner we find the hen and duck house, and poultry-yard (m), and the dwelling of the keeper (n). Hard by is the kitchen garden (o), the beds bearing the names of the vegetables growing in them, onions, garlic, celery, lettuces, poppy, carrots, cabbages, etc., eighteen in all. In the same way the physic garden presents the names of the medicinal herbs, and the cemetery (p) those of the trees, apple, pear, plum, quince, etc., planted there.
Many of the present grand cathedrals were originally Benedictine monasteries or abbeys. These were converted under Henry VIII and contain cloisters, chapter houses, and other abbatial buildings. Among them are Canterbury, Chester, Durham, Ely, Gloucester, Norwich, Peterborough, Rochester, Winchester, and Worcester.
Every large monastery had depending upon it smaller foundations known as cells or priories. Sometimes these foundations were no more than a single building serving as residence and farm offices, while other examples were miniature monasteries for 5 or 10 monks. The outlying farming establishments belonging to the monastic foundations were known as villae or granges. They were usually staffed by lay-brothers, sometimes under the supervision of a single monk.
Westminster Abbey.
Westminster Abbey was founded in the 10th century by St. Dunstan and it shows hints of French architecture in its designs. It is another example of a great Benedictine abbey, identical in its general arrangements, so far as they can be traced, with those described above.
The only traces of Dunstan's monastery to be seen today are in the round arches and massive supporting columns of the undercroft and the Pyx Chamber in the cloisters. The cloister and monastic buildings lie to the south side of the church. Parallel to the nave, on the south side of the cloister, was the refectory, with its lavatory at the door.
On the eastern side there are remains of the dormitory, raised on a vaulted substructure and communicating with the south transept. The chapter-house opens out of the same alley of the cloister. The small cloister lay to the south-east of the larger cloister, and still farther to the east we have the remains of the infirmary with the table hall, the refectory of those who were able to leave their chambers. The abbot's house formed a small courtyard at the west entrance, close to the inner gateway.
St. Mary's Abbey, York.
St Mary's Abbey, York, the largest and richest Benedictine establishment in the north of England, was first founded in 1055.
It exhibited the usual Benedictine arrangements. The entrance was by a strong gateway to the north. Close to the entrance was a chapel, where the church of St Olaf now stands, in which new-comers paid their devotions immediately on their arrival. Near the gate to the south was the guest-hall or hospitium. The buildings are completely ruined, but the walls of the nave and the cloisters are still visible on the grounds of the Yorkshire Museum. The precincts were surrounded by a strong fortified wall on three sides, the river Ouse being sufficient protection on the fourth. The stone walls still exist and are among the best surviving examples of abbey walls in the country.
Abbey of Cluny.
The Abbey of Cluny was founded by William I, Duke of Aquitaine in 910, and was noted for its strict observance of the Rule of St. Benedict. The Abbey was built in the Romanesque style.
Reforms adopted at Cluny resulted in many departures from precedent, chief among which was a highly centralized form of government entirely foreign to Benedictine tradition. The reform quickly spread beyond the limits of the Abbey of Cluny, partly by the founding of new houses and partly by the incorporation of those already existing. By the twelfth century Cluny was at the head of an order consisting of some 314 monasteries.
The abbey-church of Cluny was on a scale commensurate with the greatness of the congregation, and was regarded as one of the wonders of the Middle Ages. It was no less than 555 feet in length, and was the largest church in Christendom until the erection of St. Peter's at Rome. It consisted of five naves, a narthex, or ante-church, and several towers. Commenced by St. Hugh, the sixth abbot, in 1089, it was finished and consecrated by Pope Innocent II in 1131-32, the narthex being added in 1220. Together with the conventual buildings it covered an area of twenty-five acres. At the suppression in 1790 it was bought by the town and almost entirely destroyed.
English Cluniac houses.
The first English house of the Cluniac order was that of Lewes, founded by William de Warenne, 1st Earl of Surrey, c. AD 1077. All Cluniac houses in England were French colonies, governed by priors of that nation. All but one of the Cluniac houses in Britain that were larger than cells were known as priories, symbolising their subordination to Cluny. The exception was the priory at Paisley, which was raised to the status of an abbey in 1245, answerable only to the Pope. The head of the order was the Abbot of Cluny. All English and Scottish Cluniacs were bound to cross to France to Cluny to consult or be consulted, unless the abbot chose to come to Britain, which happened rarely.
Cistercian abbeys.
The Cistercians, a Benedictine reform, were established at Cîteaux in 1098 by St. Robert, Abbot of Molesme for the purpose of restoring as far as possible the literal observance of the Rule of St. Benedict. La Ferté, Pontigny, Clairvaux, and Morimond were the first four daughters of Cîteaux, which, in their turn, gave birth to many other monasteries. Cîteaux being the mother-abbey of the Cistercian Order, the abbot was recognized as head and superior general of the whole order. The monks of Cîteaux created the vineyards of Clos-Vougeot and Romanée, the most celebrated of Burgundy.
The rigid self-abnegation, which was the ruling principle of this reformed congregation of the Benedictine order, extended itself to the churches and other buildings erected by them. The defining architectural characteristic of the Cistercian abbeys was the most extreme simplicity and a studied plainness. Only a single, central tower was permitted, and that was to be very low. Unnecessary pinnacles and turrets were prohibited. The triforium was omitted. The windows were to be plain and undivided, and it was forbidden to decorate them with stained glass. All needless ornament was proscribed. The crosses must be of wood; the candlesticks of iron. The renunciation of the world was to be evidenced in all that met the eye.
The same spirit manifested itself in the choice of the sites of their monasteries. The more dismal, the more savage, the more hopeless a spot appeared, the more did it please their rigid mood. But they came not merely as ascetics, but as improvers. The Cistercian monasteries are, as a rule, found placed in deep, well-watered valleys. They always stand on the border of a stream; often with the buildings extending over it, as at Fountains Abbey. These valleys, now so rich and productive, had a very different appearance when the brethren first chose them as their place of retreat. Wide swamps, deep morasses, tangled thickets, and wild, impassable forests were their prevailing features. The 'bright valley', the Clara Vallis of St Bernard, had been known as the 'Valley of Wormwood', infamous as a den of robbers.
Austin Canons.
The buildings of the Austin canons or Black canons (so called from the colour of their habit) present few distinctive peculiarities. This order had its first seat in England at St. Botolph's Priory, Colchester, Essex, where a house for Austin canons was founded about AD 1105, and it very soon spread widely. As an order of regular clergy, holding a middle position between monks and secular canons, almost resembling a community of parish priests living under rule, they adopted naves of great length to accommodate large congregations. The choir is usually long, and is sometimes, as at Llanthony and Christchurch (Twynham), shut off from the aisles, or, as at Bolton, Kirkham, etc., is destitute of aisles altogether. The nave in the northern houses, not infrequently, had only a north aisle, as at Bolton, Brinkburn and Lanercost. The arrangement of the monastic buildings followed the ordinary type. The prior's lodge was almost invariably attached to the S.W. angle of the nave.
The above plan of the Abbey of St Augustine's at Bristol, now the cathedral church of that city, shows the arrangement of the buildings, which departs very little from the ordinary Benedictine type. The Austin canons' house at Thornton, in Lincolnshire, is remarkable for the size and magnificence of its gate-house, the upper floors of which formed the guest-house of the establishment, and for possessing an octagonal chapter-house of Decorated date.
Premonstratensians.
The Premonstratensian regular canons, or White canons, had as many as 35 houses in England, of which the most perfect remaining are those of Easby, Yorkshire, and Bayham, Kent. The head house of the order in England was Welbeck. This order was a reformed branch of the Augustinian canons, founded in AD 1119 by Norbert of Xanten (born on the Lower Rhine, c. 1080) at Prémontré, a secluded marshy valley in the forest of Coucy in the diocese of Laon. The order spread widely. Even in the founder's lifetime it possessed houses in Aleppo and the Kingdom of Jerusalem, where 'the Premonstratensian abbey of Saint Samuel was a daughter house of Prémontré itself. Its abbot had the status of a suffragan of the patriarch of Jerusalem, with the right to a cross but not to a mitre nor a ring'. It long maintained its rigid austerity, until in the course of years wealth impaired its discipline and its members sank into indolence and luxury. The Premonstratensians were brought to England shortly after AD 1140, and were first settled at Newhouse, in Lincolnshire, near the Humber. The ground-plan of Easby Abbey, owing to its situation on the edge of the steeply sloping bank of a river, is singularly irregular. The cloister is duly placed on the south side of the church, and the chief buildings occupy their usual positions round it. But the cloister garth, as at Chichester, is not rectangular, and all the surrounding buildings are thus made to sprawl in a very awkward fashion. The church follows the plan adopted by the Austin canons in their northern abbeys, and has only one aisle to the nave, that to the north; while the choir is long, narrow and aisleless. Each transept has an aisle to the east, forming three chapels.
The church at Bayham was destitute of aisles to either nave or choir. The latter terminated in a three-sided apse. The church is remarkable for its exceeding narrowness in proportion to its length. Stern Premonstratensian canons wanted no congregations and cared for no possessions; therefore they built their church like a long room.
The Premonstratensian order still exists, and a small group of these 'Chanones de Premontre' now run the former Benedictine abbey at Conques in southwest France, which has become well known as a refuge for pilgrims travelling the Way of Saint James from Le Puy-en-Velay in Auvergne to Santiago de Compostela in Galicia, Spain.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1316'>
Annales School
The Annales School () is a group of historians associated with a style of historiography developed by French historians in the 20th century to stress long-term social history. It is named after its scholarly journal 'Annales d'histoire économique et sociale', which remains the main source of scholarship, along with many books and monographs. The school has been highly influential in setting the agenda for historiography in France and numerous other countries, especially regarding the use of social scientific methods by historians, emphasizing social rather than political or diplomatic themes, and for being generally hostile to the class analysis of Marxist historiography.
The school deals primarily with late medieval and early modern Europe (before the French Revolution), with little interest in later topics. It has dominated French social history and influenced historiography in Europe and Latin America. Prominent leaders include co-founders Lucien Febvre (1878–1956) and Marc Bloch (1886–1944). The second generation was led by Fernand Braudel (1902–1985) and included Georges Duby (1919–1996), Pierre Goubert (1915–2012), (1921–1984), Pierre Chaunu (1923–2009), Jacques Le Goff (1924–2014), and Ernest Labrousse (1895–1988). Institutionally it is based on the 'Annales' journal, the SEVPEN publishing house, the (FMSH), and especially the 6th Section of the École pratique des hautes études, all based in Paris. A third generation was led by Emmanuel Le Roy Ladurie (1929– ) and includes Jacques Revel and Philippe Ariès (1914–1984), who joined the group in 1978. The third generation stressed history from the point of view of mentalities, or 'mentalités'. The fourth generation of Annales historians, led by Roger Chartier (1945– ), clearly distanced itself from the mentalities approach, replacing it with the cultural and linguistic turn, which emphasizes analysis of the social history of cultural practices.
The main scholarly outlet has been the journal 'Annales d'Histoire Economique et Sociale' ('Annals of economic and social history'), founded in 1929 by Lucien Febvre and Marc Bloch, which broke radically with traditional historiography by insisting on the importance of taking all levels of society into consideration and emphasized the collective nature of mentalities. Its contributors viewed events as less fundamental than the mental frameworks that shaped decisions and practices.
Braudel was editor of 'Annales' from 1956 to 1968, followed by the medievalist Jacques Le Goff. However, Braudel's informal successor as head of the school was Le Roy Ladurie. Noting the political upheavals in Europe and especially in France in 1968, Eric Hobsbawm argues that 'in France the virtual hegemony of Braudelian history and the 'Annales' came to an end after 1968, and the international influence of the journal dropped steeply.' Multiple responses were attempted by the school. Scholars moved in multiple directions, covering in disconnected fashion the social, economic, and cultural history of different eras and different parts of the globe. By the time of the crisis the school had built a vast publishing and research network reaching across France, Europe, and the rest of the world. Influence spread out from Paris, but few new ideas came in. Much emphasis was given to quantitative data, seen as the key to unlocking all of social history. However, the Annales ignored the developments in quantitative studies underway in the U.S. and Britain, which reshaped economic, political and demographic research. An attempt to require an 'Annales'-written textbook for French schools was rejected by the government. By 1980 postmodern sensibilities undercut confidence in overarching metanarratives. As Jacques Revel notes, the success of the Annales School, especially its use of social structures as explanatory forces, contained the seeds of its own downfall, for there is 'no longer any implicit consensus on which to base the unity of the social, identified with the real.' The Annales School kept its infrastructure, but lost its 'mentalités'.
The journal.
The journal began in Strasbourg as 'Annales d'histoire économique et sociale'; it moved to Paris and kept the same name from 1929 to 1939. It was successively renamed 'Annales d'histoire sociale' (1939–1942, 1945), 'Mélanges d'histoire sociale' (1942–1944), 'Annales. Economies, sociétés, civilisations' (1946–1994), and 'Annales. Histoire, Sciences Sociales' (1994– ).
In 1962 Braudel and Gaston Berger used Ford Foundation money and government funds to create a new independent foundation, the (FMSH), which Braudel directed from 1970 until his death. In 1970 the 6th Section and the 'Annales' relocated to the FMSH building. FMSH set up elaborate international networks to spread the 'Annales' gospel across Europe and the world. In 2013 it began publication of an English language edition, with all the articles translated.
The scope of topics covered by the journal is vast and experimental—there is a search for total history and new approaches. The emphasis is on social history, and very long-term trends, often using quantification and paying special attention to geography and to the intellectual world view of common people, or 'mentality' ('mentalité'). Little attention is paid to political, diplomatic, or military history, or to biographies of famous men. Instead the 'Annales' focused attention on the synthesizing of historical patterns identified from social, economic, and cultural history, statistics, medical reports, family studies, and even psychoanalysis.
Origins.
The 'Annales' was founded and edited by Marc Bloch and Lucien Febvre in 1929, while they were teaching at the University of Strasbourg and later in Paris. These authors, the former a medieval historian and the latter an early modernist, quickly became associated with the distinctive 'Annales' approach, which combined geography, history, and the sociological approaches of the 'Année Sociologique' (many members of which were their colleagues at Strasbourg) to produce an approach which rejected the predominant emphasis on politics, diplomacy and war of many 19th and early 20th-century historians as spearheaded by historians whom Febvre called Les Sorbonnistes. Instead, they pioneered an approach to a study of long-term historical structures ('la longue durée') over events and political transformations. Geography, material culture, and what later Annalistes called 'mentalités,' or the psychology of the epoch, are also characteristic areas of study. The goal of the Annales was to undo the work of the Sorbonnistes, to turn French historians away from the narrowly political and diplomatic toward the new vistas in social and economic history.
Co-founder Marc Bloch (1886–1944) was a quintessential modernist who studied at the elite École Normale Supérieure, and in Germany, serving as a professor at the University of Strasbourg until he was called to the Sorbonne in Paris in 1936 as professor of economic history. Bloch's interests were highly interdisciplinary, influenced by the geography of Paul Vidal de la Blache (1845–1918) and the sociology of Émile Durkheim (1858–1917). His own ideas, especially those expressed in his masterworks, 'French Rural History' ('Les caractères originaux de l'histoire rurale française,' 1931) and 'Feudal Society', were incorporated by the second-generation Annalistes, led by Fernand Braudel.
Precepts.
Georges Duby, a leader of the school, described the kind of history he taught in similar terms.
The Annalistes, especially Lucien Febvre, advocated a 'histoire totale', or 'histoire tout court', a complete study of a historic problem.
Postwar.
Bloch was shot by the Gestapo during the German occupation of France in World War II for his active membership of the French Resistance, and Febvre carried on the 'Annales' approach in the 1940s and 1950s. It was during this time that he mentored Braudel, who would become one of the best-known exponents of this school. Braudel's work came to define a 'second' era of 'Annales' historiography and was very influential throughout the 1960s and 1970s, especially for his work on the Mediterranean region in the era of Philip II of Spain.
Braudel developed the idea, often associated with Annalistes, of different modes of historical time: 'l'histoire quasi immobile' (the somewhat motionless history) of historical geography, the history of social, political and economic structures ('la longue durée'), and the history of men and events, in the context of their structures.
While authors such as Emmanuel Le Roy Ladurie, Marc Ferro and Jacques Le Goff continue to carry the 'Annales' banner, today the 'Annales' approach has been less distinctive as more and more historians do work in cultural history, political history and economic history.
'Mentalités'.
Bloch's 'Les Rois Thaumaturges' (1924) looked at the long-standing folk belief that the king could cure scrofula by his thaumaturgic touch. The kings of France and England indeed regularly practiced the ritual. Bloch was not concerned with the effectiveness of the royal touch—he acted instead like an anthropologist in asking why people believed it and how it shaped relations between king and commoner. The book was highly influential in introducing comparative studies (in this case France and England), as well as long durations ('longue durée') studies spanning several centuries, even up to a thousand years, downplaying short-term events. Bloch's revolutionary charting of mentalities, or 'mentalités', resonated with scholars who were reading Freud and Proust. In the 1960s, Robert Mandrou and Georges Duby harmonized the concept of 'mentalité' history with Fernand Braudel's structures of historical time and linked mentalities with changing social conditions. A flood of 'mentalité' studies based on these approaches appeared during the 1970s and 1980s. By the 1990s, however, 'mentalité' history had become interdisciplinary to the point of fragmentation, but still lacked a solid theoretical basis. While not explicitly rejecting 'mentalité' history, younger historians increasingly turned to other approaches.
Braudel.
Fernand Braudel became the leader of the second generation after 1945. He obtained funding from the Rockefeller Foundation in New York and founded the 6th Section of the École Pratique des Hautes Études, which was devoted to the study of history and the social sciences. It became an independent degree-granting institution in 1975 under the name École des Hautes Études en Sciences Sociales (EHESS). Braudel's followers admired his use of the longue durée approach to stress the slow, and often imperceptible, effects of space, climate and technology on the actions of human beings in the past. The 'Annales' historians, after living through two world wars and incredible political upheavals in France, were deeply uncomfortable with the notion that multiple ruptures and discontinuities created history. They preferred to stress inertia and the longue durée. Special attention was paid to geography, climate, and demography as long-term factors. They believed the continuities of the deepest structures were central to history, beside which upheavals in institutions or the superstructure of social life were of little significance, for history lies beyond the reach of conscious actors, especially the will of revolutionaries. They rejected the Marxist idea that history should be used as a tool to foment and foster revolutions. In turn the Marxists called them conservatives.
Braudel's first book, 'La Méditerranée et le Monde Méditerranéen à l'Epoque de Philippe II' (1949) ('The Mediterranean and the Mediterranean World in the Age of Philip II') was his most influential. This vast panoramic view used ideas from other social sciences, employed effectively the technique of the longue durée, and downplayed the importance of specific events and individuals. It stressed geography but not 'mentalité'. It was widely admired, but most historians did not try to replicate it and instead focused on their specialized monographs. The book dramatically raised the worldwide profile of the Annales School.
Regionalism.
Before 'Annales,' French history supposedly happened in Paris. Febvre broke decisively with this paradigm in 1912, with his sweeping doctoral thesis on 'Philippe II et la Franche-Comté.' The geography and social structure of this region overwhelmed and shaped the king's policies set in Paris.
The 'Annales' historians did not try to replicate Braudel's vast geographical scope in 'La Méditerranée.' Instead they focused on regions in France over long stretches of time. The most important was the study of the 'Peasants of Languedoc' by Braudel's star pupil and successor Emmanuel Le Roy Ladurie. The regionalist tradition flourished especially in the 1960s and 1970s in the work of Pierre Goubert in 1960 on Beauvais and René Baehrel on Basse-Provence. 'Annales' historians in the 1970s and 1980s turned to urban regions, including Pierre Deyon (Amiens), Maurice Garden (Lyon), Jean-Pierre Bardet (Rouen), Georges Freche (Toulouse), and Jean-Claude Perrot (Caen). By the 1970s the shift was underway from the earlier economic history to cultural history and the history of mentalities.
Impact outside France.
The 'Annales' school systematically reached out to create an impact on other countries. Its success varied widely. The 'Annales' approach was especially well received in Italy and Poland. Franciszek Bujak (1875–1953) and Jan Rutkowski (1886–1949), the founders of modern economic history in Poland and of the journal 'Roczniki Dziejów Społecznych i Gospodarczych' (1931– ), were attracted to the innovations of the Annales school. Rutkowski was in contact with Bloch and others, and published in the 'Annales.' After the Communists took control in the 1940s, Polish scholars were safer working on the Middle Ages and the early modern era rather than contemporary history. After the 'Polish October' of 1956 the Sixth Section in Paris welcomed Polish historians, and exchanges between the circle of the 'Annales' and Polish scholars continued until the early 1980s. The reciprocal influence between the French school and Polish historiography was particularly evident in studies on the Middle Ages and the early modern era, the periods studied by Braudel.
In South America the 'Annales' approach became popular. From the 1950s Federico Brito Figueroa was the founder of a new Venezuelan historiography based largely on the ideas of the Annales School. Brito Figueroa carried his conception of the field to all levels of university study, emphasizing a systematic and scientific approach to history and placing it squarely in the social sciences. Spanish historiography was influenced by the 'Annales School' starting in 1950 with Jaume Vicens Vives (1910–1960). In Mexico, exiled Republican intellectuals extended the Annales approach, particularly from the Center for Historical Studies of El Colegio de México, the leading graduate studies institution of Latin America.
British historians, apart from a few Marxists, were generally hostile. Academic historians decidedly sided with Geoffrey Elton's 'The Practice of History' against Edward Hallett Carr's 'What Is History?'. American, German, Indian, Russian and Japanese scholars generally ignored the school. The Americans developed their own form of 'new social history' from entirely different routes. Both the American and the 'Annales' historians picked up important family reconstitution techniques from French demographer Louis Henry.
Current.
The current leader is Roger Chartier, who is Directeur d'Études at the École des Hautes Études en Sciences Sociales in Paris, Professeur in the Collège de France, and Annenberg Visiting Professor of History at the University of Pennsylvania. He frequently lectures and teaches in the United States, Spain, Mexico, Brazil and Argentina. His work in Early Modern European History focuses on the history of education, the history of the book and the history of reading. Recently, he has been concerned with the relationship between written culture as a whole and literature (particularly theatrical plays) for France, England and Spain. His work in this specific field (based on the criss-crossing between literary criticism, bibliography, and sociocultural history) is connected to broader historiographical and methodological interests which deal with the relation between history and other disciplines: philosophy, sociology, anthropology.
Chartier's typical undergraduate course focuses upon the making, remaking, dissemination, and reading of texts in early modern Europe and America. Under the heading of 'practices,' his class considers how readers read and marked up their books, forms of note-taking, and the interrelation between reading and writing from copying and translating to composing new texts. Under the heading of 'materials,' his class examines the relations between different kinds of writing surfaces (including stone, wax, parchment, paper, walls, textiles, the body, and the heart), writing implements (including styluses, pens, pencils, needles, and brushes), and material forms (including scrolls, erasable tables, codices, broadsides and printed forms and books). Under the heading of 'places,' his class explores where texts were made, read, and listened to, including monasteries, schools and universities, offices of the state, the shops of merchants and booksellers, printing houses, theaters, libraries, studies, and closets. The texts for his course include the 'Bible', translations of Ovid, 'Hamlet', 'Don Quixote', Montaigne's essays, Pepys's diary, Richardson's 'Pamela', and Franklin's autobiography.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1317'>
Antimatter
In particle physics, antimatter is material composed of antiparticles, which have the same mass as particles of ordinary matter but opposite charge and other reversed particle properties such as lepton and baryon number. Encounters between particles and antiparticles lead to the annihilation of both, giving rise to varying proportions of high-energy photons (gamma rays), neutrinos, and lower-mass particle–antiparticle pairs. Setting aside the energy carried away by any product neutrinos, which generally remains unavailable, the end result of annihilation is a release of energy available to do work, proportional to the total matter and antimatter mass, in accordance with the mass–energy equivalence equation, 'E' = 'mc'2.
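As a rough numerical sketch (not part of the original text), the scale of the energy release follows directly from 'E' = 'mc'2; here the total annihilated mass is taken, for illustration, as one gram of matter plus one gram of antimatter:

```python
# Illustrative calculation of annihilation energy via E = m * c**2.
c = 299_792_458.0   # speed of light in m/s (exact by definition)
m = 2e-3            # total converted mass: 1 g matter + 1 g antimatter, in kg

energy_joules = m * c ** 2
print(f"{energy_joules:.3e} J")  # ~1.8e14 J, roughly 43 kilotons of TNT
```

The comparison with TNT (1 kiloton ≈ 4.184e12 J) is only to convey scale; no practical process converts macroscopic amounts of mass this way.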
Antiparticles bind with each other to form antimatter just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton can form an antihydrogen atom. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. To date, however, anti-atoms more complex than antihelium have neither been artificially produced nor observed in nature. Studies of cosmic rays have identified both positrons and antiprotons, presumably produced by high-energy collisions between particles of ordinary matter.
There is considerable speculation as to why the observable universe is apparently composed almost entirely of ordinary matter, as opposed to a more symmetric combination of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the greatest unsolved problems in physics. The process by which this asymmetry between particles and antiparticles developed is called baryogenesis.
Antimatter in the form of anti-atoms is one of the most difficult materials to produce. Antimatter in the form of individual anti-particles, however, is commonly produced by particle accelerators and in some types of radioactive decay.
History of the concept.
The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of 'squirts' (sources) and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.
The term antimatter was first used by Arthur Schuster in two rather whimsical letters to 'Nature' in 1898, in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity.
The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. These were discovered by Carl D. Anderson in 1932 and named positrons (a contraction of 'positive electrons'). Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929.
Notation.
One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components: a proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by their electric charge; thus, the electron and positron are denoted simply as e− and e+, respectively. To prevent confusion, however, the two conventions are never mixed.
Origin and asymmetry.
Almost all matter observable from the Earth seems to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.
Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays impacting Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (i.e., the rest mass of an electron multiplied by 'c'2).
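The 511 keV figure can be checked directly: it is the electron rest mass multiplied by 'c'2, converted from joules to electronvolts. A minimal sketch using the CODATA constant values:

```python
# Verify that the electron rest energy m_e * c**2 is about 511 keV,
# the photon energy observed in electron-positron annihilation.
m_e = 9.1093837015e-31  # electron mass in kg (CODATA)
c = 299_792_458.0       # speed of light in m/s (exact by definition)
eV = 1.602176634e-19    # joules per electronvolt (exact by definition)

rest_energy_keV = m_e * c ** 2 / eV / 1e3
print(f"{rest_energy_keV:.1f} keV")  # ~511.0 keV
```

Since the positron has the same rest mass as the electron, each of the two back-to-back annihilation photons carries this energy when the pair annihilates at rest.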
Recent observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant cloud of antimatter surrounding the galactic center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the galactic center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains tremendous energy while falling into a stellar remnant.
Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma-ray signatures of annihilation events in colliding superclusters.
Natural production.
Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. Recent (as of January 2011) research by the American Astronomical Society has discovered antimatter (positrons) originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.
Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, also called baryon asymmetry, is attributed to CP-violation: a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery.
Positrons can be produced by radioactive decay, but this mechanism can occur both naturally and artificially.
Observation in cosmic rays.
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes.
Preliminary results from the presently operating Alpha Magnetic Spectrometer ('AMS-02') on board the International Space Station show that positrons in the cosmic rays arrive with no directionality and with energies ranging from 10 to 250 GeV, with the fraction of positrons to electrons increasing at higher energies. One suggested interpretation of these results is positron production in annihilation events of massive dark matter particles.
Antiprotons arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the 'AMS-02', designated 'AMS-01', was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the 'AMS-01' established an upper limit of 1.1×10−6 for the antihelium-to-helium flux ratio.
Artificial production.
Positrons.
Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in larger numbers than by any previous synthetic process. A laser drove electrons through a millimeter-radius gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; however, new simulations showed that short, ultra-intense lasers and millimeter-thick gold are a far more effective source.
Antiprotons, antineutrons, and antinuclei.
The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark. The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.
In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN. At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.
Antihydrogen atoms.
In 1995, CERN announced that it had successfully brought into existence nine antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic ('hot') and were not well suited to study. To overcome this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely ATHENA and ATRAP. In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The primary goal of these collaborations is the creation of less energetic ('cold') antihydrogen, better suited to study.
In 1999, CERN activated the Antiproton Decelerator, a device that slows antiprotons to a small fraction of their initial energy. The resulting antiprotons were still too 'hot' to produce study-effective antihydrogen, but the device was a huge leap forward. In late 2002 the ATHENA project announced that it had created the world's first 'cold' antihydrogen, and the ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning-Malmberg trap. The overall cooling process is workable but highly inefficient: of the approximately 25 million antiprotons that leave the Antiproton Decelerator, roughly 25,000 make it to the Penning-Malmberg trap, about 0.1% of the original amount.
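The quoted trapping efficiency can be checked directly from the two numbers in the text:

```python
# Of ~25 million antiprotons leaving the Antiproton Decelerator,
# roughly 25,000 reach the Penning-Malmberg trap.
leaving = 25_000_000
trapped = 25_000

efficiency = trapped / leaving
print(f"{efficiency:.1%}")  # 0.1%
```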
The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with energies less than 100 meV. While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some of the positrons combine with antiprotons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.
Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields.
Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped.
On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before.
ALPHA has used these trapped atoms to initiate research into the spectral properties of the antihydrogen.
The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, its facilities are capable of producing ten million antiprotons per minute. Even assuming 100% conversion of antiprotons to antihydrogen, it would take about 100 billion years to produce 1 gram, or 1 mole, of antihydrogen (approximately 6.02×10²³ atoms of antihydrogen).
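A quick order-of-magnitude check of that 100-billion-year figure, assuming a steady 10 million antiprotons per minute and 100% conversion:

```python
# How long to accumulate one mole of antihydrogen at CERN's quoted
# production rate of 10 million antiprotons per minute?
N_A = 6.02214076e23                      # Avogadro's number, atoms per mole
rate_per_year = 10e6 * 60 * 24 * 365.25  # antiprotons per year

years = N_A / rate_per_year
print(f"{years:.2e} years")  # ~1.1e11, on the order of 100 billion years
```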
Antihelium.
Antihelium-3 nuclei were first observed in the 1970s in proton-nucleus collision experiments
and later created in nucleus-nucleus collision experiments. Nucleus-nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of antihelium-4 nuclei.
Preservation.
Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.
In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes.
Cost.
Scientists claim that antimatter is the costliest material to make. In 2006, Gerald Smith estimated $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators), and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss Francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions).
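The two cost figures quoted for positrons are consistent with each other; a simple unit conversion reproduces the per-gram price:

```python
# Gerald Smith's 2006 estimate: $250 million for 10 milligrams of
# positrons, which the text equates to $25 billion per gram.
cost_usd = 250e6
mass_g = 10e-3  # 10 milligrams expressed in grams

per_gram = cost_usd / mass_g
print(f"${per_gram:,.0f} per gram")  # $25,000,000,000 per gram
```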
Several studies funded by the NASA Institute for Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.
Uses.
Medical.
Matter-antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Laboratory experiments have also shown that antiprotons have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy.
Fuel.
Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion system or other antimatter rocketry, such as the redshift rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.
If matter-antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (about 9×10¹⁶ J/kg) is about 10 orders of magnitude greater than chemical energies, about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction, or roughly 8×10¹³ J/kg), and about 2 orders of magnitude greater than the best possible results expected from fusion (about 6.4×10¹⁴ J/kg for the proton-proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10¹⁷ J (180 petajoules) of energy (by the mass-energy equivalence formula E = mc²), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
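As a sanity check, annihilating equal masses of matter and antimatter (1 kg of each, taken here for illustration) converts 2 kg of rest mass to energy, which reproduces the 180 petajoule and ~43 megaton figures quoted above:

```python
# E = m c^2 for 2 kg of annihilated rest mass, compared with the
# TNT equivalent (1 megaton of TNT = 4.184e15 J).
c = 2.99792458e8   # speed of light, m/s
m = 2.0            # total annihilated mass, kg (1 kg matter + 1 kg antimatter)

E = m * c**2
megatons = E / 4.184e15
print(f"{E:.2e} J ~ {megatons:.0f} Mt TNT")  # ~1.80e17 J ~ 43 Mt
```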
Not all of that energy can be utilized by any realistic propulsion technology, because of the nature of the annihilation products. While electron-positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a mean lifetime of about 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a mean lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust.
Note that charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.22 + (2/3)*0.78 = 0.74).
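The energy bookkeeping in that paragraph can be verified with the fractions it gives:

```python
# Fraction of the charged pions' energy ultimately carried off by
# neutrinos: 22% directly, plus 2/3 of the 78% carried by the muons.
to_nu_direct = 0.22
to_muon = 0.78

nu_fraction = to_nu_direct + (2 / 3) * to_muon
print(f"{nu_fraction:.2f}")  # 0.74
```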
Weapons.
Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. However, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1322'>
Casa Batlló
Casa Batlló () is a renowned building located in the center of Barcelona and is one of Antoni Gaudí’s masterpieces. A remodel of a previously built house, it was redesigned in 1904 by Gaudí and has been refurbished several times after that. Gaudí's assistants Domènec Sugrañes i Gras, Josep Canaleta and Joan Rubió also contributed to the renovation project. The local name for the building is 'Casa dels ossos' (House of Bones), as it has a visceral, skeletal organic quality. It was originally designed for a middle-class family and situated in a prosperous district of Barcelona.
Like everything Gaudí designed, it is only identifiable as Modernisme or Art Nouveau in the broadest sense. The ground floor, in particular, has unusual tracery, irregular oval windows and flowing sculpted stone work. There are few straight lines, and much of the façade is decorated with a colorful mosaic made of broken ceramic tiles (trencadís). The roof is arched and was likened to the back of a dragon or dinosaur. A common theory about the building is that the rounded feature to the left of centre, terminating at the top in a turret and cross, represents the lance of Saint George (patron saint of Catalonia, Gaudí's home), which has been plunged into the back of the dragon.
History.
Initial construction (1877).
The building that is now Casa Batlló was built in 1877 by Antoni Gaudí, commissioned by Lluís Sala Sánchez.
It was a classical building without remarkable characteristics, within the eclecticism traditional at the end of the 19th century. The building had a basement, a ground floor, four upper floors and a garden in the back.
Batlló family.
The house was bought by Josep Batlló in 1900. The design of the house made it undesirable to buyers, but the Batlló family decided to buy it for its central location. It is located in the middle of Passeig de Gràcia, which in the early 20th century was known as a very prestigious and fashionable area, one where a prominent family could draw attention to itself.
In 1904 Josep Batlló still owned the home. The Batlló family was very well known in Barcelona for its contribution to the city's textile industry. Josep Batlló i Casanovas was a textile industrialist who owned several factories in the city. He married Amalia Godo Belaunzaran, from the family that founded the newspaper La Vanguardia. Josep wanted an architect who would design a house like no other, one that stood out as audacious and creative. Both Josep and his wife were open to anything, and they decided not to limit Gaudí. Josep did not want his house to resemble any of the houses of the rest of the Batlló family, such as Casa Pía, built by Josep Vilaseca. He chose the architect who had designed Park Güell because he wanted him to come up with a risky plan. The family lived on the noble floor of Casa Batlló until the middle of the 1950s.
Renovation (1904-1906).
In 1904 Josep Batlló hired Gaudí to design his home; at first the plan was to tear down the building and construct a completely new house. Gaudí convinced Josep that a renovation was sufficient, and he submitted the planning application the same year. The building was completed and refurbished in 1906. Gaudí completely changed the main apartment, which became the residence of the Batlló family. He expanded the central well to supply light to the whole building and also added new floors. In the same year the Barcelona City Council selected the house as a candidate for that year's best building award. Despite Gaudí's design, the award was given to another architect that year.
Refurbishments.
Josep Batlló died in 1934, and the house was kept in order by his wife until her death in 1940. After the deaths of both parents, the house was kept and managed by the children until 1954, when an insurance company named Seguros Iberia acquired Casa Batlló and set up offices there. In 1970 the first refurbishment took place, mainly in several of the interior rooms. In 1983 the exterior balconies were restored to their original colour, and a year later the exterior façade was illuminated in the ceremony of La Mercè.
Multiple uses.
In 1993, the current owners of Casa Batlló bought the home and continued refurbishments throughout the whole building. Two years later, in 1995, Casa Batlló began to hire out its facilities for different events. More than 2,500 square meters of rooms within the building were rented out for many different functions. Due to the building's location and the beauty of the rented facilities, the rooms of Casa Batlló were in very high demand and hosted many important events for the city.
Design.
Overview.
The local name for the building is 'Casa dels ossos' (House of Bones), as it has a visceral, skeletal organic quality. The building is remarkable in appearance; like everything Gaudí designed, it is identifiable as Modernisme or Art Nouveau only in the broadest sense. The ground floor, in particular, is rather astonishing, with tracery, irregular oval windows and flowing sculpted stone work.
It seems that the goal of the designer was to avoid straight lines completely. Much of the façade is decorated with a mosaic made of broken ceramic tiles (trencadís) that starts in shades of golden orange moving into greenish blues. The roof is arched and was likened to the back of a dragon or dinosaur. A common theory about the building is that the rounded feature to the left of centre, terminating at the top in a turret and cross, represents the lance of Saint George (patron saint of Catalonia, Gaudí's home), which has been plunged into the back of the dragon.
Loft.
The loft is considered one of the most unusual spaces. It was formerly a service area for the tenants of the building's apartments, containing laundry rooms and storage areas. It is known for its simplicity of shapes and its Mediterranean influence, achieved through the use of white on the walls. It contains a series of sixty catenary arches that create a space resembling the ribcage of an animal; some believe this "ribcage" belongs to the dragon whose spine is represented on the roof.
Noble floor and museum.
The noble floor, at more than seven hundred square meters, is the main floor of the building. It is accessed through a private entrance hall that uses skylights resembling tortoise shells and vaulted walls in curving shapes. On the noble floor there is a spacious landing with direct views of the blue tiling of the building's light well. On the Passeig de Gràcia side is Mr. Batlló's study, with a festejador, a secluded spot for courting couples, decorated with a mushroom-shaped fireplace. The elaborate, animal-like décor continues throughout the whole noble floor.
In 2002, the house opened its doors to the public, who were allowed to visit the noble floor as part of the celebration of the International Year of Gaudí. Casa Batlló opened with quite unanticipated success, and visitors became eager to see the rest of the house. Two years later, in celebration of the hundredth anniversary of the start of work on Casa Batlló, the fifth floor was restored and the visit was extended to the loft and the light well. In 2005, Casa Batlló became a UNESCO World Heritage Site.
Roof.
The roof terrace is one of the most popular features of the entire house due to its famous dragon-back design. Gaudí represents an animal's spine by using tiles of different colors on one side. The roof is decorated with four chimney stacks that are designed to prevent backdraughts.
Exterior facade.
The facade has three distinct sections which are harmoniously integrated. The lower ground floor with the main floor and two first-floor galleries are contained in a structure of Montjuïc sandstone with undulating lines. The central part, which reaches the last floor, is a multicolored section with protruding balconies. The top of the building is a crown, like a huge gable, which is at the same level as the roof and helps to conceal the room where there used to be water tanks. This room is currently empty. The top displays a trim with ceramic pieces that has attracted multiple interpretations.
The roof's arched profile recalls the spine of a dragon with ceramic tiles for scales, and a small triangular window towards the right of the structure simulates the eye. Legend has it that it was once possible to see the Sagrada Familia through this window, which was being built simultaneously. The view of the Sagrada Familia is now blocked from this vantage point by newer buildings. The tiles were given a metallic sheen to simulate the varying scales of the monster, with the color grading from green on the right side, where the head begins, to deep blue and violet in the center, to red and pink on the left side of the building.
One of the highlights of the facade is a tower topped with a cross of four arms oriented to the cardinal directions. It is a bulbous, root-like structure that evokes plant life. There is a second bulb-shaped structure similarly reminiscent of a thalamus flower, represented by a cross whose arms are actually buds announcing the next flowering. The tower is decorated with monograms of Jesus (JHS), Maria (M with the ducal crown) and Joseph (JHP), made of ceramic pieces that stand out golden against the green background that covers the facade. These symbols show the deep religiosity of Gaudí, who was inspired by the contemporaneous construction of his basilica to choose the theme of the Holy Family.
The bulb was broken when it was delivered, perhaps during transportation. Although the manufacturer committed to re-do the broken parts, Gaudí liked the aesthetic of the broken masonry and asked that the pieces be stuck to the main structure with lime mortar and held in with a brass ring.
The central part of the facade evokes the surface of a lake with water lilies, reminiscent of Monet's Nymphéas, with gentle ripples and reflections caused by the glass and ceramic mosaic. It is a great undulating surface covered with plaster fragments of colored glass discs combined with 330 rounds of polychrome pottery. The discs were designed by Gaudí and Jujol between tests during their stay in Majorca, while working on the restoration of the Cathedral of Palma.
Finally, above the central part of the facade is a smaller balcony, also of iron, with a different exterior aesthetic, closer to a local type of lily. Two iron arms were installed here to support a pulley to raise and lower furniture.
The facade of the main floor, made entirely of sandstone, is supported by two columns. The design is complemented by joinery windows set with multicolored stained glass. In front of the large windows, as if they were pillars supporting the complex stone structure, there are six fine columns that seem to simulate the bones of a limb, with an apparent central articulation; in fact, this is a floral decoration. The rounded shapes of the gaps and the lip-like edges carved into the stone surrounding them create the semblance of a fully open mouth, for which the Casa Batlló has been nicknamed the 'house of yawns.' The structure repeats on the first floor and in the design of two windows at the ends forming galleries, but the large central window carries the two balconies described above.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1324'>
Park Güell
The Park Güell (Catalan: Parc Güell) is a public park system composed of gardens and architectonic elements located on Carmelo Hill, in Barcelona (Spain). Carmelo Hill belongs to the mountain range of Sierra de Collserola — the Parque del Carmelo (Catalan: Parc del Carmel) is located on the northern face. Park Güell is located in La Salut, a neighborhood in the Gràcia district of Barcelona. With urbanization in mind, Eusebi Güell assigned the design of the park to Antoni Gaudí, a renowned architect and the face of Catalonian modernism. The park was built between 1900 and 1914 and was officially opened as a public park in 1926. In 1984, UNESCO declared the park a World Heritage Site under “Works of Antoni Gaudí”.
Park Güell is the reflection of Gaudí's artistic plenitude, belonging to his naturalist phase (the first decade of the 20th century). During this period, the architect perfected his personal style through inspiration from organic shapes found in nature, putting into practice a series of new structural solutions rooted in a deep analysis of geometry and its shapes. To that, the Catalan artist added creative liberty and an imaginative, ornamental invention. Starting from a sort of baroquism, his works acquired a structural richness of forms and volumes, free of rational rigidity or any sort of classical premises. In the design of Park Güell, Gaudí unleashed all his architectonic genius and put into practice many of the innovative structural solutions that would become the symbol of his organic style and culminate in the creation of the Basilica and Expiatory Church of the Holy Family (Catalan: Sagrada Família).
Güell and Gaudí conceived this park, situated within a natural setting of incomparable beauty. They imagined an organized grouping of high-quality homes, equipped with all the latest technological advancements to ensure maximum comfort and finished off with an artistic touch. They also envisioned a community strongly influenced by symbolism, since in the common elements of the park they tried to synthesize many of the political and religious ideals shared by patron and architect: there are noticeable concepts originating from political Catalanism, especially in the entrance stairway where the Catalan countries are represented, and from Catholicism, in the Monumento al Calvario, originally designed to be a chapel. Mythological elements are also important: apparently Güell and Gaudí's conception of the park was inspired by the Temple of Apollo at Delphi.
On the other hand, many experts have tried to link the park to various symbols because of the complex iconography that Gaudí applied to the urban project. Such references range from political vindication to religious exaltation, passing through mythology, history and philosophy. Specifically, many studies claim to see references to Freemasonry, which is highly unlikely given the deep religious beliefs of both Gaudí and Count Güell; nor have such references been substantiated in the historiography of the architect. The multiplicity of symbols found in Park Güell is, as previously mentioned, associated with political and religious signs, with a touch of mystery in keeping with the period's taste for enigmas and puzzles.
Origins as a housing development.
The park was originally part of a commercially unsuccessful housing site, the idea of Count Eusebi Güell, after whom the park was named. It was inspired by the English garden city movement; hence the original English word 'Park' in the name (in Catalan the name is 'Parc Güell'). The site was a rocky hill with little vegetation and few trees, called 'Muntanya Pelada' (Bare Mountain). It already included a large country house called Larrard House or Muntaner de Dalt House, and was next to a neighborhood of upper-class houses called 'La Salut' (The Health). The intention was to exploit the fresh air (well away from smoky factories) and beautiful views from the site, with sixty triangular lots being provided for luxury houses. Count Eusebi Güell added to the prestige of the development by moving in 1906 to live in Larrard House. Ultimately, only two houses were built, neither designed by Gaudí. One was intended to be a show house but, on being completed in 1904, was put up for sale; as no buyers came forward, Gaudí, at Güell's suggestion, bought it with his savings and moved in with his family and his father in 1906. This house, where Gaudí lived from 1906 to 1926, was built by Francesc Berenguer in 1904. It contains original works by Gaudí and several of his collaborators and has been the Gaudí House Museum (Casa Museu Gaudí) since 1963. In 1969 it was declared a historical artistic monument of national interest.
Municipal garden.
It has since been converted into a municipal garden. It can be reached by underground railway (although the stations are some distance from the Park, at a much lower level below the hill), by city buses, or by commercial tourist buses. Since October 2013 entrance to the Park has been free, but there is an entrance fee to visit the monumental zone (the main entrance and the parts containing mosaics). Gaudí's house, 'la Torre Rosa' – containing furniture that he designed – can be visited only for an additional entrance fee. There is a reduced rate for those wishing to see both Gaudí's house and the Sagrada Família Church.
Park Güell is skillfully designed and composed to bring the peace and calm that one would expect from a park. The buildings flanking the entrance, though very original and remarkable with fantastically shaped roofs with unusual pinnacles, fit in well with the use of the park as pleasure gardens and seem relatively inconspicuous in the landscape when one considers the flamboyance of other buildings designed by Gaudí. One of these buildings houses a permanent exhibition of the Barcelona City History Museum MUHBA focused on the building itself, the park and the city.
The focal point of the park is the main terrace, surrounded by a long bench in the form of a sea serpent. The curves of the serpent bench form a number of enclaves, creating a more social atmosphere. Gaudí incorporated many motifs of Catalan nationalism, and elements from religious mysticism and ancient poetry, into the Park.
Roadways around the park to service the intended houses were designed by Gaudí as structures jutting out from the steep hillside or running on viaducts, with separate footpaths in arcades formed under these structures. This minimized the intrusion of the roads, and Gaudí designed them using local stone in a way that integrates them closely into the landscape. His structures echo natural forms, with columns like tree trunks supporting branching vaulting under the roadway, and the curves of vaulting and alignment of sloping columns designed in a similar way to his Church of Colònia Güell so that the inverted catenary arch shapes form perfect compression structures.
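The catenary principle mentioned above can be stated compactly. As a sketch (the symbols below follow the usual textbook convention and are not from the source), a uniform hanging chain takes the catenary shape; inverted, its pure tension becomes pure compression:

```latex
% Shape of a uniform chain hanging under gravity (the catenary):
y(x) = a \cosh\!\left(\frac{x}{a}\right), \qquad a = \frac{T_0}{\lambda g}
% T_0: horizontal tension, \lambda: mass per unit length, g: gravity.
% The chain carries pure tension everywhere along its length; inverting
% the curve turns that tension into pure compression, so an arch built
% to the inverted shape supports its own weight without bending stresses.
```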
The large cross at the Park's high-point offers the most complete view of Barcelona and the bay. It is possible to view the main city in panorama, with the Sagrada Família and the Montjuïc area visible at a distance.
The park supports a wide variety of wildlife, notably several of the non-native species of parrot found in the Barcelona area. Other birds can be seen from the park, with records including Short-toed eagle. The park also supports a population of Hummingbird hawk moths.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1325'>
Casa Milà
Casa Milà (, ), also known as La Pedrera (, meaning 'The Quarry'), is a modernist building located at 92, Passeig de Gràcia (passeig is Catalan for promenade) in Barcelona, Catalonia, Spain, at the corner of Carrer de Provença, in the Eixample. It was the last civil work designed by the Catalan architect Antoni Gaudí, and was built between 1906 and 1910. In 1912, Gaudí and the Milà i Segimon couple signed the certificate of completion of the work on the Casa Milà.
It was commissioned by the businessman Pere Milà i Camps and his wife Roser Segimon i Artells, from Reus, the widow of the wealthy 'indiano' Josep Guardiola i Grau. At the time it was very controversial because of the bold undulating stone facade and the twisted wrought iron decorating the balconies and windows, designed mostly by Josep Maria Jujol, who also designed some of the plaster ceilings.
Architecturally it is considered an innovative work for having a structure of columns and floors free of load-bearing walls. Similarly, the facade – which is made of stone – is self-supporting, i.e., it does not carry the load of the floors. Another innovative element was the construction of the underground garage.
In 1984, it was declared a World Heritage Site by UNESCO. It is currently the headquarters of the Fundació-Catalunya La Pedrera, which manages the various exhibitions and activities done there and the public visits.
History.
Building owners.
Casa Milà was built for the married couple, Roser Segimon and Pere Milà. Roser Segimon was the wealthy widow of Josep Guardiola, an 'Indiano', a term applied to the Spaniards returning from the American colonies with tremendous wealth. Her second husband, Pere Milà, was a developer who was criticized for his flamboyant lifestyle and ridiculed by the contemporary residents of Barcelona, when they joked about his love of money and opulence, wondering if he was not rather more interested in 'the widow’s 'guardiola' (piggy bank), than in 'Guardiola’s widow'.
Pere Milà i Camps was a businessman, interested in arts and entertainment, owner of the Plaza de Toros Monumental, who would eventually enter politics. He came from a bourgeois family and was quite rich, especially after his marriage to Roser Segimon i Artells. Milà wanted to stand out from the Barcelona bourgeoisie, and living on Passeig de Gràcia in a fashionable building served that ambition. While visiting Josep Batlló, his father's partner in a hemp business, when the Casa Batlló was being built, he met Gaudí, who assured him that his next work would be made specially for him.
Construction process (1905-1910).
On June 9, 1905, Milà bought a house at the corner of Passeig de Gràcia and Carrer de Provença owned by José Antonio Ferrer-Vidal. Gaudí was hired in September to design a new home on the site, and on February 2, 1906, the project was presented to the City Council; work began by demolishing the existing building rather than reforming it, as had been done with the Casa Batlló. The building was completed in December 1910, and in October 1911 the Milàs moved in. Finally, on October 31, 1912, Gaudí issued the final certificate, allowing the other floors of the building to be rented.
Gaudi, a Catholic and a devotee of the Virgin Mary, planned for the Casa Milà to be a spiritual symbol. Overt religious elements include an excerpt from the Rosary prayer on the cornice and planned statues of Mary, specifically Our Lady of the Rosary, and two archangels, St. Michael and St. Gabriel.
Gaudí's design was not followed in some respects. The local government objected to some aspects of the project, fined the owners for many infractions of building codes, and ordered the demolition of elements exceeding the city's height standard. The Encyclopedia of Twentieth Century Architecture states that the statuary was indeed of Mary the mother of Jesus, also noting Gaudí's devoutness, and that the owner decided not to include it after the Semana Trágica, an outbreak of anticlericalism in the city. After the decision was made to exclude the statuary of Mary and the archangels, Gaudí contemplated abandoning the project but was persuaded not to by a priest.
Property changes.
In 1940 Pere Milà died, and in 1946 Roser Segimon sold the property to Joseph Balañá Ballvé Pellisé, in partnership with the family of Pío Rubert Laporta, known for its department stores on the Ronda de Sant Antoni. The sale brought 18 million pesetas for the building, and the buyers formed the Compañía Inmobiliaria Provenza SA (CIPS) to administer it. Roser Segimon continued to live on the main floor until her death in 1964.
The new owners divided the main floor on the Carrer de Provença side into five apartments instead of the original two. In 1953 they commissioned Juan Francisco Barba Corsini to build 13 apartments in the attic, which until then had been the laundry; increasingly disused, it had become an unsafe place filled with garbage and junk. Barba Corsini respected Gaudí's original volume and structure, taking a free-plan approach suited to the open space without right angles. The apartments were placed along the outer side of the space, leaving a corridor following the curve of the arches on the central courtyard side, with the darker area between the two courtyards serving for circulation. The apartments had two or three rooms, some with a loft, with a design and furniture typical of the early 1950s, using materials such as brick, ceramic and wood, and furniture design reminiscent of Eero Saarinen, such as the 'Pedrera' chair. The works, however, involved installing a chimney out of keeping with Gaudí's.
Installations and activities mixed in with the neighboring homes in the early 1960s led to considerable losses of Gaudí's work, especially decorative elements. In 1966 the Northern Insurance Company moved into the building, followed by a controversial bingo hall that would remain until 1985. An academy, offices of Inoxcrom and of cement companies, among others, were also installed. Maintenance costs were very high, and the owners, rather than fitting out more homes, neglected the building; by 1971 age had loosened some of the stones. Emergency repairs, respectful of the original, were made by Josep Anton Comas, especially the retouched painting of the courtyards.
Restoration.
On July 24, 1969 Gaudí's work received official recognition as a historico-artistic Monument. It was a first step to prevent further destruction. Casa Milà was in poor condition in the early 1980s. It had been painted a dreary brown and many of its interior color schemes had been abandoned or allowed to deteriorate, but it has since been restored and many of the original colors revived.
In 1984 it was named part of a World Heritage Site encompassing some of Gaudí's works, which marked a turning point in its protection. First the City Council tried to rent the main floor to install offices for the 1992 Olympic bid. Finally, the day before Christmas 1986, Caixa de Catalunya bought La Pedrera for 900 million pesetas. On February 19, 1987, urgently needed work began on the restoration and cleaning of the facade, carried out by the architects Joseph Emilio Hernández-Cross and Rafael Vila. In 1990, as part of the Cultural Olympiad, the renovated main floor opened with the exhibition 'Golden Square', dedicated to the modern architecture of the central Eixample.
Design.
The building is 1,323 m2 per floor on a plot of 1,620 m2. Gaudí began the first sketches in his workshop in the Sagrada Familia, where he conceived of this house as a constant curve, both outside and inside, incorporating multiple solutions of formal geometry and elements of a naturalistic nature.
Casa Milà is the result of two buildings, which are structured around two courtyards that provide light to the nine levels: basement, ground floor, mezzanine, main (or noble) floor, four upper floors, and an attic. The basement was intended to be the garage, the main floor the residence of the Milàs (a flat of all 1,323 m2), and the rest distributed over 20 homes for rent. The resulting layout is shaped like an asymmetrical '8' because of the different shape and size of the courtyards. The attic housed the laundry and drying areas, forming an insulating space for the building and simultaneously determining the levels of the roof.
One of the most significant parts of the building is the roof, crowned with skylights or staircase exits, ventilation towers, and chimneys. All of these elements, constructed of timbrel coated with limestone, broken marble or glass, have a specific architectural function, yet they have also become real sculptures integrated into the building.
The building is a unique entity, where the shape of the exterior continues into the interior. The apartments feature ceilings with plaster reliefs of great dynamism, handcrafted wooden doors, windows, and furniture (sadly, now gone), as well as hydraulic pavement and various ornamental elements.
The stairways were intended for service use, since access to the homes was by elevator, except on the noble floor, where Gaudí added a staircase of a particular configuration.
Gaudí wanted the people who lived in the flats to know each other. Therefore the lifts stopped only on every second floor, so that neighbors on different floors had to meet one another.
Structure.
Regarding the structure, Casa Milà is characterized by its self-supporting stone facade, meaning that it is free of the functions of a load-bearing wall, which connects to the internal structure of each floor by means of curved iron beams surrounding the perimeter of each floor. This construction system allows, on one hand, large openings in the facade which give light to the homes, and on the other, free structuring of the different levels, so that all walls can be demolished without affecting the stability of the building. This allows the owners to change their minds at will and to modify, without problems, the interior layout of the homes.
Constructive and decorative items.
Facade.
The facade is composed of large blocks of limestone, from the Garraf Massif on the first floor and from the Vilafranca quarry for the higher levels. The blocks were cut to follow the curvature of the model, then raised to their position and adjusted to align in a continuous curvilinear texture with the pieces around them.
Viewed from the outside, there are three parts: the main body of six stories with winding stone floors; a set-back block of two floors with an undulating, wave-like rhythm, a smoother and whiter texture, and small openings resembling loopholes; and finally the body of the roof.
The original facade lost some of the ironwork bars on the ground floor. In 1928 the tailor Mosella opened the first store in La Pedrera and, during the works, eliminated the bars. This concerned no one at the time, because in the middle of the twentieth century twisted ironwork had little importance. The bars remained lost until, years later, Americans donated one of them to the MoMA, where it is on display.
Within the restoration initiatives launched in 1987, some pieces of stone that had fallen were restored to the facade. In order to remain faithful to the original, material was obtained from the Vilafranca quarry, even though it was no longer operating.
Hall and courtyards.
The building offers a completely original solution for the lobby, which is not closed and dark but open and airy, connected with the courtyards, which are equally important in making it a place of transit directly visible to anyone entering the building. There are two courtyards: a round one on the Passeig de Gràcia side and an elliptical one on the Carrer de Provença side.
Structurally, the courtyards are key, as they support the loads of the interior facades. The floor of the courtyard is supported by pillars of cast iron. In the elliptical courtyard the beams and girders adopt a traditional constructive solution, but for the cylindrical one Gaudí applied an ingenious solution: two concentric beams joined by stretched radial beams, like the spokes of a bicycle wheel, running from a point on the outer beam to two points above and below on the central girder, which thus acts as a keystone and works in tension and compression simultaneously. A structure twelve feet in diameter was thus supported by a piece of great beauty, considered 'the soul of the building', with a clear resemblance to Gothic crypts. The centerpiece was built in a shipyard, and Josep Maria Carandell likens it to a ship's wheel, interpreting Gaudí's intent as representing the helm of the ship of life.
Access is protected by a massive iron gate, with a design attributed to Jujol. It was shared by people and cars, giving access to the garage in the basement, which is now an auditorium.
The two halls are fully polychromed with oil paintings on plaster, showing an eclectic repertoire of references to mythology and flowers.
During construction, a problem arose in adapting the basement as a garage for cars, the new invention that thrilled the bourgeoisie. The future resident Felix Anthony Meadows, owner of Industrial Linera, requested a correction to the access ramp because his Rolls-Royce could not enter. Gaudí agreed to remove a pillar on the ramp leading into the garage, so Meadows, who had his sales office on Carrer Fontanella and his factory in Parets del Vallès, could reach both places by car from La Pedrera.
For the floors of Casa Milà, Gaudí used a parquet model of square timber pieces in two colors, and a hydraulic pavement of hexagonal pieces in blue with sea motifs that had originally been designed for the Casa Batlló but had not been used there; Gaudí recovered it for La Pedrera. The tiles were produced in gray by John Bertrand under the supervision of Gaudí, who 'touched them up with his own fingers', in the words of the manufacturer Josep Bay.
Loft.
Like the Casa Batlló, Casa Milà shows Gaudí's application of the catenary arch as a support structure for the roof, a form he had already used shortly after finishing his studies in the wooden framework for the Mataró cooperative known as 'La Obrera Mataronense'. In this case, Gaudí used the Catalan timbrel technique, imported from Italy in the fourteenth century.
The attic housed the laundry room, under a Catalan-vaulted roof supported on 270 parabolic arches of varying height, spaced about 80 cm apart. From inside, the ribs suggest the skeleton of a huge animal, and outside they give the roof its unconventional shape, similar to a landscape of hills and valleys. The shape and location of the courtyards mean that the arches rise higher where the space narrows and lower where it widens.
The builder Bayó explained its construction: 'First a partition was raised along the wide side of the dwelling. Then Canaleta indicated the springing of each arch, and Bayó nailed a nail at each starting point of the arch at the top of the wall. From these nails hung a chain whose lowest point coincided with the rise of the arch. The profile given by the chain was then traced on the wall, and from this profile the carpenter made the formwork on which three rows of flat tiles were laid. Gaudí also wanted to add a longitudinal axis of tiles linking the keys of all the arches.'
Roof and chimneys.
The work of Gaudí on the rooftop of La Pedrera drew on his experience at Palau Güell, but with solutions that were clearly more innovative – this time creating shapes and volumes with more body, more prominence, and less polychromy. <Permanyer, either 1996 or 2008>
On the rooftop there are six skylights/staircase exits (four of which were covered with broken pottery, some ending in the double cross typical of Gaudí); twenty-eight chimneys in several groupings (like those designed for Casa Batlló), twisted so that the smoke escaped better; two half-hidden vents whose function is to renew the air in the building; and, crowning the walkway that goes around this dream castle, four small domes (cupulins) that discharge onto the facade. The staircase exits also house the water tanks, some of which are snail-shaped.
The stepped roof of La Pedrera, called 'the garden of warriors' by the poet Pere Gimferrer because the chimneys appear to be protecting the skylights, has undergone a radical restoration, removing chimneys added in interventions after Gaudí, television antennas, and other elements that degraded the space. The restoration brought back the splendor to the chimneys and the skylights that were covered with fragments of marble and broken Valencia tiles. One of the chimneys was topped with glass pieces – it was said that Gaudí did that the day after the inauguration of the building, taking advantage of the empty bottles from the party. It was restored with the bases of champagne bottles from the early twentieth century. The repair work has enabled the restoration of the original impact of the overhangs made of stone from Ulldecona with fragments of tiles. This whole set is more colorful than the facade, although here the creamy tones are dominant.
Furniture.
Gaudí, as he had done in Casa Batlló, designed furniture specifically for the main floor. This was part of modernism's concept of the total work of art, in which the architect assumes responsibility both for global issues such as the structure and the facade and for every detail of the decor, designing the furniture and accessories such as lamps, planters, floors and ceilings.
This was another point of friction with Mrs. Milà, who complained that there was no straight wall on which to place her Steinway piano, which Roser Segimon played often and quite well. Gaudí's response was blunt: 'So play the violin.'
The result of these disagreements was the loss of much of Gaudí's decorative legacy, as the owner disposed of furniture and changed the layout of the main floor after Gaudí died. Some spare pieces remain in private collections, such as an oak screen 4 m long by 1.96 m high that can be seen in the Museum of Catalan Modernism, a chair and desk of Pere Milà, and a few other complementary elements.
As for the oak doors carved with the gouge by the Casas i Bardés cabinetmakers, only those of the Milà floor and the show flat were made, because when Mrs. Milà learned the price, it was decided that no more of that quality would be produced.
Architecture.
'Casa Milà' is part of the UNESCO World Heritage Site 'Works of Antoni Gaudí'. It was a predecessor of some buildings with a similar biomorphic appearance:
Free exhibitions often are held on the first floor, which also provides some opportunity to see the interior design. There is a charge for entrance to the apartment on the fourth floor and the roof. The other floors are not open to visitors.
Constructive similarities.
Gaudí drew inspiration for La Pedrera from a mountain, but there is no agreement on which was the reference model. Joan Bergós thought it was the rocks of Fray Guerau in the Prades mountains. Joan Matamala thought the model could have been St. Miquel del Fai, while the sculptor Vicente Vilarubias believed it was inspired by the cliffs of Torrent de Pareis in Mallorca. Other options include the mountains of Uçhisar in Cappadocia, suggested by Juan Goytisolo, or the Mola, suggested by Louis Permanyer on the basis that Gaudí visited the area in 1885 while fleeing an outbreak of cholera in Barcelona.
Some people say that the interior layout of La Pedrera comes from studies Gaudí made of medieval fortresses, an image reinforced by the rooftop chimneys, which resemble sentinels in great helmets. The structure of the iron door in the lobby avoids any symmetry, straight line or repetitive pattern; rather, it evokes the soap bubbles that form between the hands, or the structures of plant cells.
Criticism and controversy.
The building's unconventional style made it the subject of much criticism. It was given the nickname 'La Pedrera'. Casa Milà appeared in many satirical magazines. Joan Junceda presented it as a traditional 'Easter cake' by means of cartoons in 'Patufet'. Joaquim Garcia made a joke about the difficulty of setting the damask wrought iron balconies in his magazine. Homeowners in Passeig de Gracia became angry with Milà and ceased to say hello to him, arguing that the weird building by Gaudi would lower the price of land in the area.
Casa Milà caused some administrative problems too. In December 1907 the City Hall stopped work on the building because of a pillar that occupied part of the sidewalk, not respecting the alignment of facades. Again, on August 17, 1908, more problems occurred when the building surpassed the predicted height and borders of its construction site. The Council called for a fine of 100,000 pesetas (approximately 25% of the cost of the work) or for the demolition of the attic and roof. The dispute was resolved a year and a half later, on December 28, 1909, when the Commission certified that it was a monumental building and thus not required to be in 'strict compliance with the bylaws'.
The owner entered the building in the City Council's annual artistic buildings contest, which that year also considered two works by Sagnier (Calle Mallorca 264, and Còrsega with Diagonal), the Gustà house, which was the private house of the architect Jaume Gustà, and the Pérez Samanillo house, designed by Hervás and Arizmendi. Although the most dramatic entry and clear favorite was Casa Milà, the jury ruled that 'even though the facades are finished, much remains before it is in a fully completed, finalized and perfect state for appreciation.' The winner in 1910 was the Pérez Samanillo house, now the Equestrian Circle.
Gaudí's relations with Roser Segimon deteriorated during the construction and decoration of the house. There were many disagreements between them; one example was the monumental bronze Virgin of the Rosary, which Gaudí wanted on the front of the building in homage to the name of the owner (Roser Segimon), and which the artist Carles Mani i Roig was to sculpt. The statue was never made, although the words 'Ave gratia M plena Dominus tecum' were indeed written at the top of the facade. The continuing disagreements led Gaudí to take Milà to court over his fees. The lawsuit was won by Gaudí in 1916, and he gave the 105,000 pesetas he won in the case to charity, stating that 'the principles mattered more than money.' Milà had to mortgage La Pedrera to pay.
After Gaudí's death in 1926, Roser Segimon got rid of most of the furniture that Gaudí had designed and covered over parts of Gaudí's designs with new decorations in the style of Louis XVI. When La Pedrera was acquired by Caixa de Catalunya and restored in 1990, some of the original decorations reemerged.
When the Civil War broke out in July 1936, the Milàs were on vacation in Blanes. Some ground-floor premises of La Pedrera were collectivized by the Unified Socialist Party of Catalonia, and the Milàs fled to the Franco zone, leaving their home after saving some artwork.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1327'>
Antiparticle
Corresponding to most kinds of particles, there is an associated antiparticle with the same mass and opposite charge (including electric charge). For example, the antiparticle of the electron is the positively charged electron, or positron, which is produced naturally in certain types of radioactive decay.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of Charge Parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate.
Particle-antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
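The energetics of the PET process described above can be checked with a few constants: a pair annihilating essentially at rest releases the rest energy of both particles, so each of the two gamma rays carries one electron rest energy, about 511 keV. A minimal sketch (constant values from CODATA; the function name is ours):

```python
# Energy of each gamma ray from electron-positron annihilation at rest.
M_E = 9.1093837015e-31   # electron mass, kg (CODATA)
C = 299792458.0          # speed of light, m/s (exact)
EV = 1.602176634e-19     # joules per electronvolt (exact)

def annihilation_photon_energy_kev():
    """Each photon carries one electron rest energy, m_e * c^2,
    since the annihilating pair is essentially at rest."""
    energy_joules = M_E * C ** 2
    return energy_joules / EV / 1e3  # J -> keV

if __name__ == "__main__":
    print(f"Each annihilation photon carries about "
          f"{annihilation_photon_energy_kev():.1f} keV")
```

This 511 keV line is what PET scanners are tuned to detect in coincidence.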
Antiparticles are produced naturally in beta decay, and in the interaction of cosmic rays in the Earth's atmosphere. Because charge is conserved, it is not possible to create an antiparticle without either destroying a particle of the same charge (as in beta decay) or creating a particle of the opposite charge. The latter is seen in many processes in which both a particle and its antiparticle are created simultaneously, as in particle accelerators. This is the inverse of the particle-antiparticle annihilation process.
Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made out of quarks, the antineutron from antiquarks, and they are distinguishable from one another because neutrons and antineutrons annihilate each other upon contact. However, other neutral particles are their own antiparticles, such as photons, the hypothetical gravitons, and some WIMPs.
History.
Experiment.
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber— a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud-chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios.
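The charge-to-mass measurement described above can be sketched numerically. For a nonrelativistic particle moving perpendicular to a magnetic field, the magnetic force supplies the centripetal force, qvB = mv²/r, so q/m = v/(rB). The numeric values below are purely illustrative, not taken from Anderson's data:

```python
# Charge-to-mass ratio from a curved track in a magnetic field.
# Balancing magnetic and centripetal force: q*v*B = m*v^2/r  =>  q/m = v/(r*B)

def charge_to_mass_ratio(v, r, b):
    """Return q/m in C/kg given speed v (m/s), track radius r (m),
    and magnetic field b (T); nonrelativistic approximation."""
    return v / (r * b)

if __name__ == "__main__":
    # Illustrative numbers roughly matching an electron:
    # v = 1e7 m/s, r = 5.69 cm, B = 1 mT gives q/m ~ 1.76e11 C/kg.
    print(charge_to_mass_ratio(1e7, 0.0569, 1e-3))
```

Note that the ratio alone cannot distinguish an electron from a positron; only the sense of curvature relative to the field (and the particle's direction of travel) reveals the sign of the charge.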
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
Hole theory.
Solutions of the Dirac equation contained negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a 'sea' of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a 'hole' in the sea that would act exactly like a positive-energy electron with a reversed charge. These he interpreted as 'negative-energy electrons' and attempted to identify them with protons in his 1930 paper 'A Theory of Electrons and Protons'. However, these 'negative-energy electrons' turned out to be positrons, and not protons.
Dirac was aware of the problem that his picture implied an infinite negative charge for the universe. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
However, the problem of infinite charge of the universe remains. Also, as we now know, bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems.
Particle-antiparticle annihilation.
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may 'fluctuate' into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through processes such as the one pictured here, which is a complicated example of mass renormalization.
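The impossibility of single-photon annihilation in free space follows from a short kinematic argument, sketched here in the standard notation (not taken from the source):

```latex
% In the centre-of-momentum frame of the e^- e^+ pair:
\mathbf{p}_{e^-} + \mathbf{p}_{e^+} = \mathbf{0}, \qquad
E_{\mathrm{tot}} \ge 2 m_e c^2 > 0
% A single photon always satisfies E_\gamma = |\mathbf{p}_\gamma| c,
% so conserving energy (E_\gamma = E_{\mathrm{tot}} > 0) would force
% |\mathbf{p}_\gamma| > 0, contradicting momentum conservation.
% Hence e^- + e^+ \to \gamma is forbidden in free space; near a nucleus,
% the nucleus can absorb the recoil momentum and the process may occur.
```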
Properties of antiparticles.
Quantum states of a particle and an antiparticle can be interchanged by applying the charge conjugation (C), parity (P), and time reversal (T) operators. If formula_1 denotes the quantum state of a particle (n) with momentum p, spin J whose component in the z-direction is σ, then one has
where nc denotes the charge conjugate state, 'i.e.', the antiparticle. This behaviour under CPT is the same as the statement that the particle and its antiparticle lie in the same irreducible representation of the Poincaré group. Properties of antiparticles can be related to those of particles through this. If T is a good symmetry of the dynamics, then
where the proportionality sign indicates that there might be a phase on the right-hand side. In other words, a particle and its antiparticle must have equal masses and lifetimes but opposite values of all charges.
Quantum field theory.
'This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.'
One may try to quantize an electron field without mixing the annihilation and creation operators by writing
where we use the symbol 'k' to denote the quantum numbers 'p' and σ of the previous section together with the sign of the energy, 'E(k)', and 'ak' denotes the corresponding annihilation operator. Of course, since we are dealing with fermions, the operators must satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian
then one sees immediately that the expectation value of 'H' need not be positive. This is because 'E(k)' can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge conjugate 'antiparticle' field, with its own creation and annihilation operators satisfying the relations
where 'k' has the same 'p', and opposite σ and sign of the energy. Then one can rewrite the field in the form
where the first sum is over positive energy states and the second over those of negative energy. The energy becomes
where 'E0' is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, 'i.e.', formula_11 and formula_12. Then the energy of the vacuum is exactly 'E0'. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of 'ak' and 'bk' shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
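The construction just described can be written out schematically. This is a standard reconstruction consistent with the surrounding prose; the article's own formulas (formula_11, formula_12, and the field expansions) are not reproduced here:

```latex
% Naive expansion over all modes k (E(k) of either sign):
\psi(x) = \sum_{k} a_k\, u_k(x), \qquad
H = \sum_{k} E(k)\, a_k^{\dagger} a_k ,
% so the expectation value of H need not be positive.

% Introduce antiparticle operators b_k for the negative-energy modes,
% with \{a_k, a_{k'}^{\dagger}\} = \{b_k, b_{k'}^{\dagger}\} = \delta_{kk'}:
\psi(x) = \sum_{E(k)>0} a_k\, u_k(x) + \sum_{E(k)>0} b_k^{\dagger}\, v_k(x),
\qquad
H = \sum_{E(k)>0} E(k)\,\bigl(a_k^{\dagger} a_k + b_k^{\dagger} b_k\bigr) + E_0 ,

% E_0 is an (infinite, negative) constant; the vacuum satisfies
% a_k\,|0\rangle = b_k\,|0\rangle = 0, so H - E_0 is positive definite.
```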
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Feynman–Stueckelberg interpretation.
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stueckelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Because this picture was first developed by Ernst Stueckelberg and acquired its modern form in Feynman's work, it is called the 'Feynman–Stueckelberg interpretation' of antiparticles in honor of both scientists.
As a consequence of this interpretation, Villata argued that the assumption of antimatter as CPT-transformed matter would imply that the gravitational interaction between matter and antimatter is repulsive.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1331'>
Arabian Prince
Mik Lezan (born June 17, 1965), better known by his stage name Arabian Prince, is an American rapper and hip hop producer, best known for being an original member of the rap group, N.W.A.
Biography.
He started working with Bobby Jimmy & the Critters in 1984. He also produced the hit single and album for JJ Fad, 'Supersonic.'
He was a founding member of N.W.A, but when fellow member Ice Cube returned from the Phoenix Institute of Technology in 1988, Arabian Prince found himself surplus to the group's needs: Eazy-E, Ice Cube and MC Ren were the main performers, DJ Yella was the turntablist, and Dr. Dre was the main producer.
After leaving N.W.A, Arabian Prince began his solo career. His first solo album 'Brother Arab' was released in 1989, although it sold poorly.
Arabian Prince continued his solo career and released his fourth album 'Where's My Bytches' in 1993, which was his last album of the 1990s.
He later started releasing music again with his Professor X project on the Dutch label Clone Records. In 2007, he performed as a DJ on the 2K Sports Holiday Bounce Tour with artists from the Stones Throw label. In 2008, Stones Throw released a compilation of his electro-rap material from the 1980s. One of his songs was included on the 2007 video game 'College Hoops 2K8'.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1332'>
August 7
This day marks the approximate midpoint of summer in the Northern Hemisphere and of winter in the Southern Hemisphere (starting the season at the June solstice).
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1333'>
August 8
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1334'>
April 16
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1335'>
Associative property
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, rearranging the parentheses in such an expression will not change its value. Consider the following equations:

(2 + 3) + 4 = 2 + (3 + 4) = 9
2 × (3 × 4) = (2 × 3) × 4 = 24
Even though the parentheses were rearranged, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that 'addition and multiplication of real numbers are associative operations.'
Associativity is not to be confused with commutativity, which addresses whether 'a × b = b × a'.
Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. In contrast to the theoretical counterpart, the addition of floating point numbers in computer science is not associative, and is an important source of rounding error.
Definition.
Formally, a binary operation formula_3 on a set 'S' is called associative if it satisfies the associative law:

(x formula_3 y) formula_3 z = x formula_3 (y formula_3 z) for all x, y, z in 'S'.

Here, formula_3 stands for the symbol of the operation, which may be any symbol, or even no symbol at all (juxtaposition), as is common for multiplication.
The associative law can also be expressed in functional notation thus: formula_7.
Generalized associative law.
If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law. For instance, a product of four elements may be written in five possible ways:

((ab)c)d,  (a(bc))d,  (ab)(cd),  a((bc)d),  a(b(cd))

If the product operation is associative, the generalized associative law says that all these formulas will yield the same result, making the parentheses unnecessary. Thus 'the' product can be written unambiguously as abcd.
As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.
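The growth just mentioned is counted by the Catalan numbers: a product of n factors can be fully parenthesized in C(n−1) ways. A small sketch in Python (the function name is ours):

```python
from math import comb

def parenthesizations(n):
    """Number of ways to fully parenthesize a product of n factors:
    the Catalan number C(n-1) = comb(2(n-1), n-1) / n."""
    return comb(2 * (n - 1), n - 1) // n

# A product of four elements can be written in five ways.
print(parenthesizations(4))                          # 5
print([parenthesizations(n) for n in range(2, 7)])   # [1, 2, 5, 14, 42]
```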
Examples.
Some examples of associative operations include the following.
Propositional logic.
Rule of replacement.
In standard truth-functional propositional logic, 'association' (or 'associativity') refers to two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules are:
and
where 'formula_15' is a metalogical symbol representing 'can be replaced in a proof with.'
Truth functional connectives.
'Associativity' is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following are truth-functional tautologies.
Associativity of disjunction:
Associativity of conjunction:
Associativity of equivalence:
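Because each connective is truth-functional, these associativity claims can be verified exhaustively over all eight truth assignments. A brute-force check in Python, modeling the biconditional with `==`:

```python
from itertools import product

# Verify associativity of disjunction, conjunction, and equivalence
# (biconditional) over every assignment of truth values to p, q, r.
for p, q, r in product([False, True], repeat=3):
    assert ((p or q) or r) == (p or (q or r))        # disjunction
    assert ((p and q) and r) == (p and (q and r))    # conjunction
    assert ((p == q) == r) == (p == (q == r))        # equivalence
print("all 8 assignments pass")
```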
Non-associativity.
A binary operation formula_3 on a set 'S' that does not satisfy the associative law is called non-associative. Symbolically,

(x formula_3 y) formula_3 z ≠ x formula_3 (y formula_3 z) for some x, y, z in 'S'.

For such an operation the order of evaluation 'does' matter. For example, subtraction is non-associative:

(5 − 3) − 2 = 0,  whereas  5 − (3 − 2) = 4.
Also note that infinite sums are not generally associative; for example,

(1 − 1) + (1 − 1) + (1 − 1) + · · · = 0

whereas

1 + (−1 + 1) + (−1 + 1) + · · · = 1.
The study of non-associative structures arises from reasons somewhat different from the mainstream of classical algebra. One area within non-associative algebra that has grown very large is that of Lie algebras. There the associative law is replaced by the Jacobi identity. Lie algebras abstract the essential nature of infinitesimal transformations, and have become ubiquitous in mathematics.
There are other specific types of non-associative structures that have been studied in depth. They tend to come from some specific applications. Some of these arise in combinatorial mathematics. Other examples: Quasigroup, Quasifield, Nonassociative ring.
Nonassociativity of floating point calculation.
In mathematics, addition and multiplication of real numbers is associative. By contrast, in computer science, the addition and multiplication of floating point numbers is 'not' associative, as rounding errors are introduced when dissimilar-sized values are joined together.
To illustrate this, consider a floating point representation with a 4-bit mantissa:
(1.000₂×2⁰ + 1.000₂×2⁰) + 1.000₂×2⁴  =  1.000₂×2¹ + 1.000₂×2⁴  =  1.001₂×2⁴

1.000₂×2⁰ + (1.000₂×2⁰ + 1.000₂×2⁴)  =  1.000₂×2⁰ + 1.000₂×2⁴  =  1.000₂×2⁴
Even though most computers compute with 24 or 53 bits of mantissa, this is an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimize the errors. It can be especially problematic in parallel computing.
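The same loss of low-order bits can be reproduced in ordinary double precision, and Kahan summation recovers it by carrying an explicit compensation term. A minimal sketch:

```python
def kahan_sum(values):
    """Compensated summation: carries the rounding error of each
    addition in c and feeds it back into the next term."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # re-inject the previously lost bits
        t = total + y        # big + small: low-order bits of y may be lost
        c = (t - total) - y  # recover exactly what was just lost
        total = t
    return total

big = 2.0 ** 53  # adjacent representable doubles here are 2.0 apart
print((big + 1.0) + 1.0 == big)                 # True: each 1.0 rounds away
print(big + (1.0 + 1.0) == big + 2.0)           # True: grouping changes result
print(sum([big, 1.0, 1.0]) == big)              # True: naive left-to-right sum
print(kahan_sum([big, 1.0, 1.0]) == big + 2.0)  # True: compensation restores it
```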
Notation for non-associative operations.
In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression. However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses.
A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,
while a right-associative operation is conventionally evaluated from right to left:
Both left-associative and right-associative operations occur. Left-associative operations include the following:
Right-associative operations include the following:
Non-associative operations for which no conventional evaluation order is defined include the following.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1336'>
Apache Software Foundation
The Apache Software Foundation (ASF) is an American non-profit corporation (classified as 501(c)(3) in the United States) established to support Apache software projects, including the Apache HTTP Server. The ASF was formed from the Apache Group and incorporated in Delaware, U.S., in June 1999.
The Apache Software Foundation is a decentralized community of developers. The software they produce is distributed under the terms of the Apache License and is therefore free and open source software (FOSS). The Apache projects are characterized by a collaborative, consensus-based development process and an open and pragmatic software license. Each project is managed by a self-selected team of technical experts who are active contributors to the project. The ASF is a meritocracy, implying that membership of the foundation is granted only to volunteers who have actively contributed to Apache projects. The ASF is considered a second generation open-source organization, in that commercial support is provided without the risk of platform lock-in.
Among the ASF's objectives are: to provide legal protection to volunteers working on Apache projects; to prevent the 'Apache' brand name from being used by other organizations without permission.
The ASF also holds several ApacheCon conferences each year, highlighting Apache projects, related technology, and encouraging Apache developers to gather together.
History.
The history of the Apache Software Foundation is linked to the Apache HTTP Server, whose development began in February 1995. A group of eight developers started working on enhancing the NCSA HTTPd daemon. They came to be known as the Apache Group. On March 25, 1999, the Apache Software Foundation was formed. The first official meeting of the Apache Software Foundation was held on April 13, 1999, and by general consent the initial membership list of the Apache Software Foundation was set as: Brian Behlendorf, Ken Coar, Miguel Gonzales, Mark Cox, Lars Eilebrecht, Ralf S. Engelschall, Roy T. Fielding, Dean Gaudet, Ben Hyde, Jim Jagielski, Alexei Kosut, Martin Kraemer, Ben Laurie, Doug MacEachern, Aram Mirzadeh, Sameer Parekh, Cliff Skolnick, Marc Slemko, William (Bill) Stoddard, Paul Sutton, Randy Terbush and Dirk-Willem van Gulik. After a series of additional meetings to elect board members and resolve other legal matters regarding incorporation, the effective incorporation date of the Apache Software Foundation was set to June 1, 1999.
The name 'Apache' was chosen from respect for the Native American Apache Nation, well known for their superior skills in warfare strategy and their inexhaustible endurance. It also makes a pun on 'a patchy web server'—a server made from a series of patches—but this was not its origin. The group of developers who released this new software soon started to call themselves the 'Apache Group'.
Projects.
Apache divides its software development activities into separate semi-autonomous areas called 'top-level projects' (formally known as a 'Project Management Committee' in the bylaws), some of which have a number of sub-projects. Unlike some other organizations that host FOSS projects, before a project is hosted at Apache it has to be licensed to the ASF with a grant or contributor agreement. In this way, the ASF gains the necessary intellectual property rights for the development and distribution of all its projects.
Board of directors.
The ASF board of directors has responsibility for overseeing the ASF's activities and acting as a central point of contact and communication for its projects. The board manages corporate issues, assigns resources to projects, and oversees corporate services, including funds and legal issues. It does not make technical decisions about individual projects; these are made by the individual Project Management Committees. The board is elected annually by members of the foundation and, after the May 2014 Annual Members Meeting, it consists of:
Financials.
In the 2010–11 fiscal year, the Foundation took in $539,410, almost entirely from grants and contributions with $12,349 from two ApacheCons. With no employees and 2,663 volunteers, it spent $270,846 on infrastructure, $92,364 on public relations, and $17,891 on two ApacheCons.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1338'>
Americans with Disabilities Act of 1990
The Americans with Disabilities Act of 1990 (ADA) is a law that was enacted by the U.S. Congress in 1990. Senator Tom Harkin (D-IA) authored the bill and was its chief sponsor in the Senate. Harkin delivered part of his introduction speech in sign language, saying it was so his deaf brother could understand. It was signed into law on July 26, 1990, by President George H. W. Bush, and later amended with changes effective January 1, 2009.
The ADA is a wide-ranging civil rights law that prohibits discrimination based on disability. It affords similar protections against discrimination to Americans with disabilities as the Civil Rights Act of 1964, which made discrimination based on race, religion, sex, national origin, and other characteristics illegal. In addition, unlike the Civil Rights Act, the ADA also requires covered employers to provide reasonable accommodations to employees with disabilities, and imposes accessibility requirements on public accommodations.
ADA disabilities include both mental and physical medical conditions. A condition does not need to be severe or permanent to be a disability. Equal Employment Opportunity Commission regulations provide a list of conditions that should easily be concluded to be disabilities: deafness, blindness, an intellectual disability (formerly termed mental retardation), partially or completely missing limbs or mobility impairments requiring the use of a wheelchair, autism, cancer, cerebral palsy, diabetes, epilepsy, Human Immunodeficiency Virus (HIV) infection, multiple sclerosis, muscular dystrophy, major depressive disorder, bipolar disorder, post-traumatic stress disorder, obsessive compulsive disorder, and schizophrenia. Other mental or physical health conditions also may be disabilities, depending on what the individual's symptoms would be in the absence of 'mitigating measures' (medication, therapy, assistive devices, or other means of restoring function), during an 'active episode' of the condition (if the condition is episodic). Certain specific conditions, such as kleptomania, pedophilia, exhibitionism, and voyeurism, are excluded under the definition of 'disability' in order to prevent abuse of the statute's purpose. However, other specific conditions, such as gender identity disorders, are also excluded; on this reading, the present wording of the ADA arguably encourages discriminatory practices rather than discouraging them.
Titles.
Title I—employment.
The ADA states that a 'covered entity' shall not discriminate against 'a qualified individual with a disability'. This applies to job application procedures, hiring, advancement and discharge of employees, job training, and other terms, conditions, and privileges of employment. 'Covered entities' include employers with 15 or more employees, as well as employment agencies, labor organizations, and joint labor-management committees. There are strict limitations on when a covered entity can ask job applicants or employees disability-related questions or require them to undergo medical examination, and all medical information must be kept confidential.
Prohibited discrimination may include, among other things, firing or refusing to hire someone based on a real or perceived disability, segregation, and harassment based on a disability. Covered entities are also required to provide reasonable accommodations to job applicants and employees with disabilities. A reasonable accommodation is a change in the way things are typically done that the person needs because of a disability, and can include, among other things, special equipment that allows the person to perform the job, scheduling changes, and changes to the way work assignments are chosen or communicated. An employer is not required to provide an accommodation that would involve undue hardship (significant difficulty or expense), and the individual who receives the accommodation must still perform the essential functions of the job and meet the normal performance requirements. An employee or applicant who currently engages in the illegal use of drugs is not considered 'qualified' when a covered entity takes adverse action based on such use.
Part of Title I was found unconstitutional by the United States Supreme Court as it pertains to states in the case of 'Board of Trustees of the University of Alabama v. Garrett' as violating the sovereign immunity rights of the several states as specified by the Eleventh Amendment to the United States Constitution. The provision allowing private suits against states for money damages was invalidated.
Title II—public entities (and public transportation).
Title II prohibits disability discrimination by all public entities at the local (i.e. school district, municipal, city, county) and state level. Public entities must comply with Title II regulations by the U.S. Department of Justice. These regulations cover access to all programs and services offered by the entity. Access includes physical access described in the ADA Standards for Accessible Design and programmatic access that might be obstructed by discriminatory policies or procedures of the entity.
Title II applies to public transportation provided by public entities through regulations by the U.S. Department of Transportation. It includes the National Railroad Passenger Corporation, along with all other commuter authorities. This section requires the provision of paratransit services by public entities that provide fixed route services.
Title II also applies to all state and local public housing, housing assistance, and housing referrals. The Office of Fair Housing and Equal Opportunity is charged with enforcing this provision.
Title III—public accommodations (and commercial facilities).
Under Title III, no individual may be discriminated against on the basis of disability with regards to the full and equal enjoyment of the goods, services, facilities, or accommodations of any place of 'public accommodation' by any person who owns, leases, or operates a place of 'public accommodation'. 'Public accommodations' include most places of lodging (such as inns and hotels), recreation, transportation, education, and dining, along with stores, care providers, and places of public displays.
Under Title III of the ADA, all 'new construction' (construction, modification or alterations) after the effective date of the ADA (approximately July 1992) must be fully compliant with the Americans With Disabilities Act Accessibility Guidelines (ADAAG) found in the Code of Federal Regulations at 28 C.F.R., Part 36, Appendix 'A'.
Title III also has application to existing facilities. One of the definitions of 'discrimination' under Title III of the ADA is a 'failure to remove' architectural barriers in existing facilities. This means that even facilities that have not been modified or altered in any way after the ADA was passed still have obligations. The standard is whether 'removing barriers' (typically defined as bringing a condition into compliance with the ADAAG) is 'readily achievable', defined as '..easily accomplished without much difficulty or expense.'
The statutory definition of 'readily achievable' calls for a balancing test between the cost of the proposed 'fix' and the wherewithal of the business and/or owners of the business. Thus, what might be 'readily achievable' for a sophisticated and financially capable corporation might not be readily achievable for a small or local business.
There are exceptions to this title; many private clubs and religious organizations may not be bound by Title III. With regard to historic properties (those properties that are listed or that are eligible for listing in the National Register of Historic Places, or properties designated as historic under state or local law), those facilities must still comply with the provisions of Title III of the ADA to the 'maximum extent feasible' but if following the usual standards would 'threaten to destroy the historic significance of a feature of the building' then alternative standards may be used.
Under 2010 revisions of Department of Justice regulations, newly constructed or altered swimming pools, wading pools, and spas must have an accessible means of entrance and exit to pools for disabled people. However, the requirement is conditioned on whether providing access through a fixed lift is 'readily achievable.' Other requirements, based on pool size, include providing a certain number of accessible means of entry and exit, which are outlined in Section 242 of the standards. However, businesses are free to consider the differences in application of the rules depending on whether the pool is new or altered, or whether the swimming pool was in existence before the effective date of the new rule. Full compliance may not be required for existing facilities; Sections 242 and 1009 of the 2010 Standards outline such exceptions.
Title IV—telecommunications.
Title IV of the ADA amended the landmark Communications Act of 1934 primarily by adding section . This section requires that all telecommunications companies in the U.S. take steps to ensure functionally equivalent services for consumers with disabilities, notably those who are deaf or hard of hearing and those with speech impairments. When Title IV took effect in the early 1990s, it led to the installation of public teletypewriter (TTY) machines and other TDD (telecommunications devices for the deaf). Title IV also led to the creation, in all 50 states and the District of Columbia, of what were then called dual-party relay services and now are known as telecommunications relay services (TRS), such as STS relay. Today, many TRS-mediated calls are made over the Internet by consumers who use broadband connections. Some are video relay service (VRS) calls, while others are text calls. In either variation, communication assistants translate between the signed or typed words of a consumer and the spoken words of others. In 2006, according to the Federal Communications Commission (FCC), VRS calls averaged two million minutes a month.
Title V—miscellaneous provisions.
Title V includes technical provisions. It discusses, for example, the fact that nothing in the ADA amends, overrides or cancels anything in Section 504. Additionally, Title V includes an anti-retaliation or coercion provision. The 'Technical Assistance Manual' for the ADA explains it: 'III-3.6000 Retaliation or coercion. Individuals who exercise their rights under the ADA, or assist others in exercising their rights, are protected from retaliation. The prohibition against retaliation or coercion applies broadly to any individual or entity that seeks to prevent an individual from exercising his or her rights or to retaliate against him or her for having exercised those rights .. Any form of retaliation or coercion, including threats, intimidation, or interference, is prohibited if it is intended to interfere.'
History.
ADA Amendments Act.
The ADA defines a covered disability as a physical or mental impairment that substantially limits one or more major life activities, a history of having such an impairment, or being regarded as having such an impairment. The Equal Employment Opportunity Commission (EEOC) was charged with interpreting the 1990 law with regard to discrimination in employment. Prior to 2011, its regulations narrowed 'substantially limits' to 'significantly or severely restricts.'
On September 25, 2008, President George W. Bush signed the ADA Amendments Act of 2008 (ADAAA) into law. The amendment broadened the definition of 'disability,' thereby extending the ADA's protections to a greater number of people. The ADAAA also added to the ADA examples of 'major life activities' including, but not limited to, 'caring for oneself, performing manual tasks, seeing, hearing, eating, sleeping, walking, standing, lifting, bending, speaking, breathing, learning, reading, concentrating, thinking, communicating, and working' as well as the operation of several specified 'major bodily functions'. The act overturned a 1999 U.S. Supreme Court case that held that an employee was not disabled if the impairment could be corrected by mitigating measures; it specifically provides that such impairment must be determined without considering such ameliorative measures. Another court restriction overturned was the interpretation that an impairment that substantially limits one major life activity must also limit others to be considered a disability.
The ADAAA led to broader coverage of impaired employees. The United States House Committee on Education and Labor states that the amendment '..makes it absolutely clear that the ADA is intended to provide broad coverage to protect anyone who faces discrimination on the basis of disability.'
'Capitol Crawl'.
Shortly before the act was passed, disability rights activists with physical disabilities coalesced in front of the Capitol Building, shed their crutches, wheelchairs, powerchairs and other assistive devices, and immediately proceeded to crawl and pull their bodies up all 100 of the Capitol's front steps, without warning. As the activists did so, many of them chanted 'ADA now,' and 'Vote. Now,' Some activists who remained at the bottom of the steps held signs and yelled words of encouragement at the 'Capitol Crawlers.' Jennifer Keelan, a second grader from Denver with cerebral palsy, was videotaped as she pulled herself up the steps, using mostly her hands and arms, saying 'I'll take all night if I have to.' This direct action is reported to have 'inconvenienced' several senators and to have pushed them to approve the act. While there are those who do not attribute much overall importance to this action, the 'Capitol Crawl' of 1990 is seen by many present-day disability activists in the United States as being the single action most responsible for 'forcing' the ADA into law.
Opposition from religious groups.
The debate over the Americans with Disabilities Act led some religious groups to take opposite positions.
Some religious groups, such as the Association of Christian Schools International, opposed the ADA in its original form. ACSI opposed the act primarily because the ADA labeled religious institutions 'public accommodations', and thus would have required churches to make costly structural changes to ensure access for all. The cost argument advanced by ACSI and others prevailed in keeping religious institutions from being labeled as 'public accommodations', and thus churches were permitted to remain inaccessible.
In addition to opposing the ADA on grounds of cost, church groups such as the National Association of Evangelicals testified against the ADA's Title I (employment) provisions on grounds of religious liberty. The NAE believed the regulation of the internal employment of churches was '.. an improper intrusion [of] the federal government.'
Opposition from business interests.
Many members of the business community opposed the passage of the Americans with Disabilities Act. Testifying before Congress, Greyhound Bus Lines stated that the act had the potential to '..deprive millions of people of affordable intercity public transportation and thousands of rural communities of their only link to the outside world.' The US Chamber of Commerce argued that the costs of the ADA would be 'enormous' and have 'a disastrous impact on many small businesses struggling to survive.' The National Federation of Independent Businesses, an organization that lobbies for small businesses, called the ADA 'a disaster for small business.' Pro-business conservative commentators joined in opposition, writing that the Americans with Disabilities Act was 'an expensive headache to millions' that would not necessarily improve the lives of people with disabilities.
Quotations.
On signing the measure, George H. W. Bush said:
About the importance of making employment opportunities inclusive, Shirley Davis, director of global diversity and inclusion at the Society for Human Resource Management, said:
Criticism.
Employment.
The ADA has been criticized on the grounds that it decreases the employment rate for people with disabilities and raises the cost of doing business for employers, largely because of the additional legal risks, which employers avoid by quietly declining to hire people with disabilities. Some researchers believe that the law has been ineffectual. Between 1991 (after its enactment) and 1995, the ADA caused a 7.8% drop in the employment rate of men with disabilities regardless of age, educational level, and type of disability, with young, less-educated and mentally disabled men the most affected.
In 2001, for men of all working ages and women under 40, Current Population Survey data showed a sharp drop in the employment of disabled workers, with the ADA as a likely cause.
However, by 2005 the employment rate among disabled people had increased to 45%.
'Professional plaintiffs'.
Since enforcement of the act began in July 1992, it has quickly become a major component of employment law. The ADA allows private plaintiffs to receive only injunctive relief (a court order requiring the public accommodation to remedy violations of the accessibility regulations) and attorneys' fees, and does not provide monetary rewards to private plaintiffs who sue non-compliant businesses. Unless a state law, such as the California Unruh Civil Rights Act, provides for monetary damages to private plaintiffs, persons with disabilities do not obtain direct financial benefits from suing businesses that violate the ADA.
The attorneys' fees provision of Title III does provide incentive for lawyers to specialize and engage in serial ADA litigation, but a disabled plaintiff does not obtain financial reward from attorneys' fees unless they act as their own attorney, or as mentioned above, a disabled plaintiff resides in a state that provides for minimum compensation and court fees in lawsuits. Moreover, there may be a benefit to these 'private attorneys general' who identify and compel the correction of illegal conditions: they may increase the number of public accommodations accessible to persons with disabilities. 'Civil rights law depends heavily on private enforcement. Moreover, the inclusion of penalties and damages is the driving force that facilitates voluntary compliance with the ADA.' Courts have noted: 'As a result, most ADA suits are brought by a small number of private plaintiffs who view themselves as champions of the disabled. For the ADA to yield its promise of equal access for the disabled, it may indeed be necessary and desirable for committed individuals to bring serial litigation advancing the time when public accommodations will be compliant with the ADA.'
However, in states that have enacted laws allowing private individuals to win monetary awards from non-compliant businesses, 'professional plaintiffs' are typically found. At least one such plaintiff in California has been barred by the courts from filing lawsuits without prior court permission. In these states a large number of frivolous complaints are filed. Through the end of fiscal year 1998, 86% of the 106,988 ADA charges filed with and resolved by the Equal Employment Opportunity Commission were either dropped or investigated and dismissed by the EEOC, though not without imposing opportunity costs and legal fees on employers.
Case law.
There have been some notable cases regarding the ADA. For example, two major hotel-room marketers (Expedia.com and Hotels.com) with a business presence on the Internet were sued because customers with disabilities could not reserve hotel rooms through their websites without substantial extra efforts that persons without disabilities were not required to perform. These suits, and other similar ones (known as 'bricks vs. clicks'), represent a major potential expansion of the ADA in that they seek to extend the ADA's authority to cyberspace, where entities may not have actual physical facilities that are required to comply.
'National Federation of the Blind v. Target Corporation'.
'National Federation of the Blind v. Target Corporation' was a case in which a major retailer, Target Corp., was sued because its web designers failed to design its website to enable persons with low or no vision to use it.
'Board of Trustees of the University of Alabama v. Garrett'.
'Board of Trustees of the University of Alabama v. Garrett', 531 U.S. 356 (2001), was a United States Supreme Court case about Congress's enforcement powers under the Fourteenth Amendment to the Constitution. It decided that Title I of the Americans with Disabilities Act was unconstitutional insofar as it allowed private citizens to sue states for money damages.
'Barden v. The City of Sacramento'.
'Barden v. The City of Sacramento', filed in March 1999, claimed that the City of Sacramento failed to comply with the ADA when, while making public street improvements, it did not bring its sidewalks into compliance with the ADA. Certain issues were resolved in Federal Court. One issue, whether sidewalks were covered by the ADA, was appealed to the 9th Circuit Court of Appeals, which ruled that sidewalks were a 'program' under ADA and must be made accessible to persons with disabilities. The ruling was later appealed to the U.S. Supreme Court, which refused to hear the case, letting stand the ruling of the 9th Circuit Court.
'Bates v. UPS'.
'Bates v. UPS' was the first equal employment opportunity class action brought on behalf of Deaf and Hard of Hearing (D/HH) workers throughout the country concerning workplace discrimination. It established legal precedent that D/HH employees and customers are fully covered under the ADA.
The outcome was that UPS agreed to pay a $5.8 million award and to implement a comprehensive accommodations program in its facilities throughout the country.
'Spector v. Norwegian Cruise Line Ltd.'.
'Spector v. Norwegian Cruise Line Ltd.' was a case decided by the United States Supreme Court in 2005. The defendant argued that, as a vessel flying the flag of a foreign nation, it was exempt from the requirements of the ADA. This argument was accepted by the federal district court and, subsequently, the Fifth Circuit Court of Appeals. However, the U.S. Supreme Court reversed the ruling of the lower courts on the basis that Norwegian Cruise Lines was a business headquartered in the United States whose clients were predominantly Americans and, more importantly, operated out of port facilities throughout the United States.
'Olmstead v. L.C.'.
'Olmstead, Commissioner, Georgia Department of Human Resources, et al. v. L. C., by Zimring, guardian ad litem and next friend, et al.' was a case before the United States Supreme Court in 1999. The two plaintiffs, L.C. and E.W., were institutionalized in Georgia with diagnoses of mental retardation and schizophrenia. Clinical assessments by the state determined that the plaintiffs could be appropriately treated in a community setting rather than the state institution. The plaintiffs sued the state of Georgia and the institution for being inappropriately treated and housed in the institutional setting rather than being treated in one of the state's community-based treatment facilities.
The Supreme Court decided under Title II of the ADA that mental illness is a form of disability and therefore covered under the ADA, and that unjustified institutional isolation of a person with a disability is a form of discrimination because it '..perpetuates unwarranted assumptions that persons so isolated are incapable or unworthy of participating in community life.' The court added, 'Confinement in an institution severely diminishes the everyday life activities of individuals, including family relations, social contacts, work options, economic independence, educational advancement, and cultural enrichment.'
Therefore, under Title II no person with a disability can be unjustly excluded from participation in or be denied the benefits of services, programs or activities of any public entity.
'Michigan Paralyzed Veterans of America v. The University of Michigan'.
This was a case filed before the United States District Court for the Eastern District of Michigan, Southern Division, on behalf of the Michigan Paralyzed Veterans of America against the University of Michigan and Michigan Stadium, claiming that Michigan Stadium violated the Americans with Disabilities Act in its $226-million renovation by failing to add enough seats for disabled fans or to provide accessible restrooms, concessions and parking. Additionally, the distribution of the accessible seating was at issue, with nearly all the seats being provided in the end-zone areas. The U.S. Department of Justice assisted in the suit filed by attorney Richard Bernstein of The Law Offices of Sam Bernstein in Farmington Hills, Michigan, which was settled in March 2008. The settlement required the stadium to add 329 wheelchair seats throughout the stadium by 2010, and an additional 135 accessible seats in clubhouses to go along with the existing 88 wheelchair seats. This case was significant because it set a precedent for the uniform distribution of accessible seating and gave the DOJ the opportunity to clarify previously unclear rules. The agreement is now a blueprint for all stadiums and other public facilities regarding accessibility.
'Paralyzed Veterans of America v. Ellerbe Becket Architects and Engineers'.
One of the first major ADA lawsuits, 'Paralyzed Veterans of America v. Ellerbe Becket Architects and Engineers, Inc.' (or 'PVA'), focused on the wheelchair accessibility of a stadium project that was still in the design phase, the MCI Center in Washington, D.C. Prior to this case, which was filed only five years after the ADA was passed, the DOJ was unable or unwilling to provide clarification on the distribution requirements for accessible wheelchair locations in large assembly spaces. While Section 4.33.3 of the ADAAG makes reference to lines of sight, no specific reference is made to seeing over standing patrons. The MCI Center, designed by Ellerbe Becket Architects & Engineers, included too few wheelchair and companion seats, and the ones that were included did not provide sight lines that would enable wheelchair users to view the playing area while the spectators in front of them were standing. This case and another related case established precedent on seat distribution and sight-line issues for ADA enforcement that continues to the present day.
'Toyota Motor Manufacturing, Kentucky, Inc. v. Williams'.
'Toyota Motor Manufacturing, Kentucky, Inc. v. Williams', 534 U.S. 184 (2002), was a case in which the Supreme Court interpreted the meaning of the phrase 'substantially impairs' as used in the Americans with Disabilities Act. It reversed a Sixth Circuit Court of Appeals decision granting a partial summary judgment in favor of the respondent, Ella Williams, that qualified her inability to perform manual job-related tasks as a disability. The Court held that the 'major life activity' definition, in evaluating the performance of manual tasks, focuses the inquiry on whether Williams was unable to perform a range of tasks central to most people in carrying out the activities of daily living; the issue is not whether Williams was unable to perform her specific job tasks. Therefore, the determination of whether an impairment rises to the level of a disability is not limited solely to activities in the workplace, but extends to manual tasks in life in general. Applying this standard, the Supreme Court found that the Court of Appeals had incorrectly determined the presence of a disability because it relied solely on her inability to perform specific manual work tasks, which was insufficient to prove a disability. The Court of Appeals should have taken into account the evidence that Williams retained the ability to do personal tasks and household chores, activities of the kind most people perform in their daily lives, and it placed too much emphasis on her job disability. Since the evidence showed that Williams was performing normal daily tasks, the Court ruled that the Court of Appeals erred in finding that Williams was disabled.
This ruling is now, however, no longer good law—it was invalidated by the ADAAA. In fact, Congress explicitly cited Toyota v. Williams in the text of the ADAAA itself as one of its driving influences for passing the ADAAA.
'Access Now v. Southwest Airlines'.
'Access Now v. Southwest Airlines' was a case in which the District Court decided that the website of Southwest Airlines was not in violation of the Americans with Disabilities Act, because the ADA is concerned with things with a physical existence and thus cannot be applied to cyberspace. Judge Patricia A. Seitz found that the 'virtual ticket counter' of the website was a virtual construct, and hence not a 'public place of accommodation.' As such, 'To expand the ADA to cover 'virtual' spaces would be to create new rights without well-defined standards.'
'Ouellette v. Viacom International Inc.'.
'Ouellette v. Viacom International Inc.' followed in Access Now's footsteps by holding that a mere online presence does not subject a website to the ADA guidelines. Thus Myspace and YouTube were not liable for a dyslexic man's inability to navigate the site regardless of how impressive the 'online theater' is.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1344'>
Apple I
The original Apple Computer, also known retroactively as the Apple I, or Apple-1, was released by the Apple Computer Company (now Apple Inc.) in 1976. It was designed and hand-built by Steve Wozniak. Wozniak's friend Steve Jobs had the idea of selling the computer. The Apple I was Apple's first product, and to finance its creation, Jobs sold his only means of transportation, a VW Microbus, and Wozniak sold his HP-65 calculator for $500. It was demonstrated in July 1976 at the Homebrew Computer Club in Palo Alto, California.
History.
On March 5, 1975 Steve Wozniak attended the first meeting of the Homebrew Computer Club in Gordon French's garage. He was so inspired that he immediately set to work on what would become the Apple I computer. Wozniak calculated that laying out his design would cost $1,000 and parts would cost another $20 per computer; he hoped to recoup his costs if 50 people bought his design for $40 each. His friend Steve Jobs obtained an order from a local computer store for 100 computers at $500 each. To fulfill the $50,000 order, they obtained $20,000 in parts at 30 days net and delivered the finished product in 10 days.
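The economics here can be checked with simple arithmetic. As an illustrative sketch only (using just the figures quoted above, not any additional sourced data):

```python
# Illustrative break-even check for Wozniak's original plan.
layout_cost = 1000    # one-time cost to lay out the design, in dollars
parts_per_unit = 20   # parts cost per computer, in dollars
price_per_unit = 40   # hoped-for price per computer, in dollars

# Each sale nets $20 over parts, so 50 sales recoup the $1,000 layout cost.
margin = price_per_unit - parts_per_unit
units_to_break_even = layout_cost // margin
assert units_to_break_even == 50

# The Byte Shop order Jobs obtained: 100 computers at $500 each.
order_value = 100 * 500
assert order_value == 50000
```

This makes plain why Jobs's order changed the scale of the venture: a single wholesale order was worth 25 times the break-even revenue of the original hobbyist plan.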
The Apple I went on sale in July 1976 at a price of $666.66, because Wozniak 'liked repeating digits' and because of a one-third markup on the $500 wholesale price. About 200 units were produced and all but 25 were sold within nine or ten months. Unlike other hobbyist computers of its day, which were sold as kits, the Apple I was a fully assembled circuit board containing about 60 chips. However, to make a working computer, users still had to add a case, power supply transformers, a power switch, an ASCII keyboard, and a composite video display. An optional board providing a cassette interface for storage was later released at a cost of $72.
The Apple I's built-in computer terminal circuitry was distinctive. All one needed was a keyboard and an inexpensive television set. Competing machines such as the Altair 8800 generally were programmed with front-mounted toggle switches and used indicator lights (red LEDs, most commonly) for output, and had to be extended with separate hardware to allow connection to a computer terminal or a teletypewriter machine. This made the Apple I an innovative machine for its day. In April 1977 the price was dropped to $475. It continued to be sold through August 1977, despite the introduction of the Apple II in April 1977, which began shipping in June of that year. Apple dropped the Apple I from its price list by October 1977, officially discontinuing it. As Wozniak was the only person who could answer most customer support questions about the computer, the company offered Apple I owners discounts and trade-ins for Apple IIs to persuade them to return their computers. These recovered boards were then destroyed by Apple, contributing to their rarity today.
Collector's item.
As of 2013, at least 61 Apple I computers have been confirmed to exist. Only six have been verified to be in working condition.
Serial numbers.
Both Steve Jobs and Steve Wozniak have stated that Apple did not assign serial numbers to the Apple I. Several boards have been found with numbered stickers affixed to them, which appear to be inspection stickers from the PCB manufacturer/assembler. A batch of boards is known to have numbers hand-written in black permanent marker on the back; these usually appear as '01-00##', and anecdotal evidence suggests they are inventory control numbers added by The Byte Shop to the batch Apple sold to it. These Byte Shop numbers have often mistakenly been described as serial numbers by auction houses and in related press coverage.
Clones and replicas.
Several Apple I clones and replicas have been released in recent years. These are all created by hobbyists and marketed to the hobbyist/collector community. Availability is usually limited to small runs in response to demand.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1346'>
Apatosaurus
Apatosaurus, sometimes known by the popular synonym Brontosaurus, is a genus of sauropod dinosaur that lived from about 154 to 150 million years ago, during the Jurassic Period (Kimmeridgian and early Tithonian ages). It was one of the largest land animals known to have ever existed. Fossils of these animals have been found in Nine Mile Quarry and Bone Cabin Quarry in Wyoming and at sites in Colorado, Oklahoma and Utah, present in stratigraphic zones 2–6.
The cervical vertebrae were less elongated and more heavily constructed than those of 'Diplodocus' and the bones of the leg were much stockier (despite being longer), implying a more robust animal. The tail was held above the ground during normal locomotion. Like most sauropods, 'Apatosaurus' had only a single large claw on each forelimb, with the first three toes on the hind limb possessing claws.
Etymology.
The composite term 'Apatosaurus' comes from the Greek words 'apate'/'apatelos', meaning 'deception'/'deceptive', and 'sauros', meaning 'lizard'; thus, 'deceptive lizard'. Paleontologist Othniel Charles Marsh (1831–1899) gave it this name because he regarded the chevron bones as similar to those of some mosasaurs, members of a group of prehistoric marine lizards. The genus synonym of 'A. excelsus', 'Brontosaurus', comes from the Greek words 'bronto', meaning 'thunder', and 'sauros', meaning 'lizard'.
Description.
'Apatosaurus' was a large, long-necked quadrupedal animal with a long, whip-like tail. Its forelimbs were slightly shorter than its hindlimbs. It was roughly the weight of four elephants. Most size estimates for 'Apatosaurus' are based on the type specimen of 'A. louisae', CM 3018; early mass estimates for 'A. louisae' and 'A. excelsus' were high, but more recent estimates using 3D models and more complex regression equations give a lower range for the larger 'A. louisae'. However, recent estimates based on specimen OMNH 1670 give considerably larger measures, with a length of 26–33 meters (almost as long as 'Supersaurus') and a mass of 36–80 tons.
The skull was small in comparison with the size of the animal. The jaws were lined with spatulate (chisel-like) teeth suited to an herbivorous diet. Like those of other sauropods, the vertebrae of the neck were deeply bifurcated; that is, they carried paired spines, resulting in a wide and deep neck. The apparently massive neck was, however, filled with an extensive system of weight-saving air sacs. 'Apatosaurus', like its close relative 'Supersaurus', is notable for the incredibly tall spines on its vertebrae, which make up more than half the height of the individual bones. The shape of the tail is unusual for a diplodocid, being comparatively slender, due to the vertebral spines rapidly decreasing in height the farther they are from the hips. 'Apatosaurus' also had very long ribs compared to most other diplodocids, giving it an unusually deep chest. The limb bones were also very robust. 'Apatosaurus' had a single large claw on each forelimb, and the first three toes possessed claws on each hindlimb. The phalangeal formula is 2-1-1-1-1, meaning that the innermost digit of the forelimb has two bones (phalanges), the next has one, etc.
Classification and species.
'Apatosaurus' is a member of the family Diplodocidae, a clade of gigantic sauropod dinosaurs. The family includes some of the longest creatures ever to walk the earth, including 'Diplodocus', 'Supersaurus', and 'Barosaurus'. Within the subfamily Apatosaurinae, 'Apatosaurus' may be most closely related to 'Suuwassea', 'Supersaurus' and 'Eobrontosaurus'.
In 1877, Othniel Charles Marsh published the name of the type species 'Apatosaurus ajax'. He followed this in 1879 with a description of another, more complete specimen, which he thought represented a new genus and species, which he named 'Brontosaurus excelsus'. In 1903, Elmer Riggs re-examined the fossils. While he agreed with Marsh that 'Brontosaurus excelsus' was likely a distinct species, he also noted many similarities between 'B. excelsus' and 'A. ajax', and decided that both should be placed in the same genus. Riggs re-classified the species as 'Apatosaurus excelsus' in 1903. Since Riggs published his opinions, almost all paleontologists have agreed that the two species should be classified together in a single genus. According to the rules of the ICZN (which governs the scientific names of animals), the name 'Apatosaurus', having been published first, had priority as the official name; 'Brontosaurus' is considered a junior synonym and has therefore been discarded from formal use.
Cladogram of the Diplodocidae after Whitlock, 2011.
'Apatosaurus ajax' is the type species of the genus, and was named by the paleontologist Othniel Charles Marsh in 1877 after Ajax, the hero from Greek mythology. Two partial skeletons have been found, including part of a skull. 'Apatosaurus laticollis', named by Marsh in 1879, is now considered a synonym of it. 'Apatosaurus excelsus' (originally 'Brontosaurus') was named by Marsh in 1879. It is known from six partial skeletons, including part of a skull, which have been found in the United States, in Colorado, Oklahoma, Utah, and Wyoming. 'Apatosaurus louisae' was named by William Holland in 1916 in honor of Mrs. Louise Carnegie, wife of Andrew Carnegie who funded field research to find complete dinosaur skeletons in the American West. 'Apatosaurus louisae' is known from one partial skeleton which was found in Utah in the United States. 'Apatosaurus parvus' was originally known as 'Elosaurus parvus', but was reclassified as a species of 'Apatosaurus' in 1994. This synonymy was upheld in 2004.
'Apatosaurus yahnahpin' was named by Filla and Redman in 1994. Robert T. Bakker made 'A. yahnahpin' the type species of a new genus, 'Eobrontosaurus' in 1998, so it is now properly 'Eobrontosaurus yahnahpin'. One partial skeleton has been found in Wyoming. It has been argued that 'Eobrontosaurus' belongs within 'Camarasaurus', although this has been questioned.
History.
Othniel Charles Marsh, a Professor of Paleontology at Yale University, described and named an incomplete (and juvenile) skeleton of 'Apatosaurus ajax' in 1877. Two years later, Marsh announced the discovery of a larger and more complete specimen at Como Bluff, Wyoming—which, because of discrepancies including the size difference, Marsh incorrectly identified as belonging to an entirely new genus and species. He named the new species 'Brontosaurus excelsus', meaning 'thunder lizard', from the Greek brontē/βροντη meaning 'thunder' and sauros/σαυρος meaning 'lizard', and from the Latin 'excelsus', 'highest, sublime', referring to the greater number of sacral vertebrae than in any other genus of sauropod known at the time.
The finds—the largest dinosaur ever discovered at the time and nearly complete, lacking only a head, feet, and portions of the tail—were then prepared for what was to be the first mounted display of a sauropod skeleton, at Yale's Peabody Museum of Natural History in 1905. The missing bones were created using known pieces from close relatives of 'Brontosaurus'. Sauropod feet discovered at the same quarry were added, along with a tail fashioned to appear as Marsh believed it should, and a composite model of what he felt the skull of this massive creature might look like. This was not a delicate 'Diplodocus'-style skull (which would later turn out to be more accurate), but was composed of 'the biggest, thickest, strongest skull bones, lower jaws and tooth crowns from three different quarries', primarily those of 'Camarasaurus', the only other sauropod for which good skull material was known at the time. This method of reconstructing incomplete skeletons based on the more complete remains of related dinosaurs continues in museum mounts and life restorations to this day. In 1979, two Carnegie researchers replaced the skull on the museum's skeleton with the correct head, found in a quarry in Utah in 1910.
Despite the much-publicized debut of the mounted skeleton, which cemented the name 'Brontosaurus' in the public consciousness, Elmer Riggs had published a paper in the 1903 edition of 'Geological Series of the Field Columbian Museum' that argued that 'Brontosaurus' was not different enough from 'Apatosaurus' to warrant its own genus, and created the combination 'Apatosaurus excelsus': 'In view of these facts the two genera may be regarded as synonymous. As the term 'Apatosaurus' has priority, 'Brontosaurus' will be regarded as a synonym.'
Despite this, at least one paleontologist—Robert Bakker—argued in the 1990s that 'A. ajax' and 'A. excelsus' are in fact sufficiently distinct that the latter continues to merit a separate genus. This idea has not been accepted by most palaeontologists.
Palaeobiology.
Until the 1970s, it was believed that sauropods like 'Apatosaurus' were too massive to support their own weight on dry land, so it was theorized that they must have lived partly submerged in water, perhaps in swamps. Recent findings do not support this, and sauropods are thought to have been fully terrestrial animals.
In 2008, footprints of a juvenile 'Apatosaurus' were reported from Quarry Five in Morrison, Colorado. Discovered in 2006 by Matthew Mossbrucker, these footprints show that juveniles could run on their hind legs in a manner similar to that of the modern basilisk lizard.
A study of diplodocid snouts showed that the square snout, large proportion of pits, and fine subparallel scratches in 'Apatosaurus' suggest it was a ground-height nonselective browser.
'Apatosaurus' was the second most common sauropod in the Morrison Formation ecosystem, after 'Camarasaurus'. It may have been a more solitary animal than other Morrison Formation dinosaurs. As a genus, 'Apatosaurus' existed for a long span of time and has been found in most levels of the Morrison. Fossils of 'Apatosaurus ajax' are known exclusively from the upper portion of the formation (upper Brushy Basin Member), about 152–151 million years ago. 'A. excelsus' fossils have been reported from the upper Salt Wash Member to the upper Brushy Basin Member, ranging from the middle to late Kimmeridgian age, about 154–151 million years ago. 'A. louisae' fossils are rare, known only from one site in the upper Brushy Basin Member, dated to the late Kimmeridgian stage (about 151 million years ago). Additional 'Apatosaurus' remains are known from even younger rocks, but they have not been identified as any particular species.
Growth.
A microscopic study of 'Apatosaurus' bones concluded that the animals grew rapidly when young and reached near-adult sizes in about 10 years. Long-bone histology enables researchers to estimate the age that a specific individual reached. A study by Griebeler et al. (2013) examined long-bone histological data and concluded that 'Apatosaurus' sp. SMA 0014 reached sexual maturity at 21 years and died at age 28, while the same growth model indicated that 'Apatosaurus' sp. BYU 601–17328 reached sexual maturity at 19 years and died at age 31.
Posture.
Diplodocids, like 'Apatosaurus', are often portrayed with their necks held high up in the air, allowing them to browse on tall trees. Some scientists have argued that the heart would have had trouble sustaining sufficient blood pressure to oxygenate the brain. Furthermore, more recent studies have shown that diplodocid necks were less flexible than previously believed, because the structure of the neck vertebrae would not have permitted the neck to bend far upwards, and that sauropods like 'Apatosaurus' were adapted to low browsing or ground feeding. However, subsequent studies demonstrated that all tetrapods appear to hold their necks at the maximum possible vertical extension when in a normal, alert posture, and argued that the same would hold true for sauropods barring any unknown, unique characteristics that set the soft tissue anatomy of their necks apart from that of other animals. 'Apatosaurus', like 'Diplodocus', would have held its neck angled upward with the head pointed downwards in a resting posture. A 2013 study found that the necks of sauropods like 'Apatosaurus' were held in a downward slope.
Physiology.
Given the large body mass of 'Apatosaurus', combined with its long neck, physiologists have encountered problems determining how these animals managed to breathe.
Beginning with the assumption that 'Apatosaurus', like crocodilians, did not have a diaphragm, the dead-space volume (the amount of unused air remaining in the mouth, trachea and air tubes after each breath) has been estimated at about 184 liters for a 30-ton specimen.
Its tidal volume (the amount of air moved in or out during a single breath) has been calculated for reptilian, mammalian and avian respiratory systems.
On this basis, its respiratory system could not have been reptilian, as its tidal volume would not have been able to replace its dead-space volume. Likewise, the mammalian system would provide only a fraction of new air on each breath. Therefore, it must have had either a system unknown in the modern world or one like birds, i.e., multiple air sacs and a flow-through lung. Furthermore, an avian system would need a lung volume of only about 600 liters compared to a mammalian requirement of 2,950 liters, which would exceed the available space. The overall thoracic volume of 'Apatosaurus' has been estimated at 1,700 liters allowing for a 500-liter, four-chambered heart (like birds, not three-chambered like reptiles) and a 900-liter lung capacity. That would allow about 300 liters for the necessary tissue. Assuming 'Apatosaurus' had an avian respiratory system and a reptilian resting-metabolism, it would need to consume only about 262 liters (69 gallons) of water per day.
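The volume bookkeeping in this argument can be tallied directly. The following is a simple arithmetic check using only the figures quoted above, not an independent estimate:

```python
# Arithmetic check of the thoracic-volume figures quoted in the text
# (30-ton specimen; all values in liters).
avian_lung_needed = 600    # lung volume an avian system would require
mammal_lung_needed = 2950  # lung volume a mammalian system would require

thoracic_volume = 1700     # estimated total chest cavity
heart = 500                # four-chambered heart
lung_capacity = 900        # lung capacity

# The remainder left for other tissue matches the 300 liters in the text.
other_tissue = thoracic_volume - heart - lung_capacity
assert other_tissue == 300

# A mammalian lung alone would exceed the whole chest cavity,
# while an avian-style lung fits within the allotted lung capacity.
assert mammal_lung_needed > thoracic_volume
assert avian_lung_needed <= lung_capacity
```

The check makes the exclusion argument concrete: the mammalian requirement is not merely large but physically impossible to house, whereas the avian system leaves room to spare.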
Tail.
An article that appeared in the November 1997 issue of 'Discover Magazine' reported research into the mechanics of 'Apatosaurus' tails by Nathan Myhrvold, a computer scientist from Microsoft. Myhrvold carried out a computer simulation of the tail, which in diplodocids like 'Apatosaurus' was a very long, tapering structure resembling a bullwhip. This computer modeling suggested that sauropods were capable of producing a whip-like cracking sound of over 200 decibels, comparable to the volume of a cannon.
Paleoecology.
Habitat.
The Morrison Formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, ranges between 156.3 million years old (Ma) at its base and 146.8 million years old at the top, which places it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. This formation is interpreted as a semiarid environment with distinct wet and dry seasons. The Morrison Basin, where the dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan, and was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels and floodplains. This formation is similar in age to the Solnhofen Limestone Formation in Germany and the Tendaguru Formation in Tanzania. In 1877 this formation became the center of the Bone Wars, a fossil-collecting rivalry between early paleontologists Othniel Charles Marsh and Edward Drinker Cope.
Paleofauna.
The Morrison Formation records an environment and time dominated by gigantic sauropod dinosaurs such as 'Camarasaurus', 'Barosaurus', 'Diplodocus', and 'Brachiosaurus'. Dinosaurs that lived alongside 'Apatosaurus' included the herbivorous ornithischians 'Camptosaurus', 'Dryosaurus', 'Stegosaurus' and 'Othnielosaurus'. Predators in this paleoenvironment included the theropods 'Saurophaganax', 'Torvosaurus', 'Ceratosaurus', 'Marshosaurus', 'Stokesosaurus', 'Ornitholestes' and 'Allosaurus', which accounted for 70 to 75% of theropod specimens and was at the top trophic level of the Morrison food web. 'Apatosaurus' is commonly found at the same sites as 'Allosaurus', 'Stegosaurus', 'Camarasaurus', and 'Diplodocus'. Other vertebrates that shared this paleoenvironment included ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphans, and several species of pterosaur. Shells of bivalves and aquatic snails are also common. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns and ferns (gallery forests), to fern savannas with occasional trees such as the 'Araucaria'-like conifer 'Brachyphyllum'.
In popular culture.
The length of time taken for Marsh's misclassification to be brought to public notice meant that the name 'Brontosaurus', associated as it was with one of the largest dinosaurs, became so famous that it persisted long after the name had officially been abandoned in scientific use.
'Apatosaurus' have often been depicted in cinema, beginning with Winsor McCay's 1914 classic 'Gertie the Dinosaur', one of the first animated films. McCay based his unidentified dinosaur on the 'Brontosaurus' skeleton in the American Museum of Natural History. The 1925 silent film 'The Lost World' featured a battle between a 'Brontosaurus' and an 'Allosaurus', using special effects by Willis O'Brien. These, and other early uses of the animal as major representative of the group, helped cement 'Brontosaurus' as a quintessential dinosaur in the public consciousness.
Sinclair Oil Corporation has long been a fixture of American roads (and briefly in other countries) with its green dinosaur logo and mascot, an 'Apatosaurus' ('Brontosaurus'). While Sinclair's early advertising included a number of different dinosaurs, eventually only 'Apatosaurus' was used as the official logo, due to its popular appeal.
As late as 1989, the U.S. Post Office caused controversy when it issued four 'dinosaur' stamps: 'Tyrannosaurus', 'Stegosaurus', 'Pteranodon' and 'Brontosaurus'. The use of the term 'Brontosaurus' in place of 'Apatosaurus', as well as the fact that 'Pteranodon' was technically a pterosaur and not a dinosaur, led to complaints of 'fostering scientific illiteracy.' The Post Office defended itself (in Postal Bulletin 21744) by saying, 'Although now recognized by the scientific community as 'Apatosaurus', the name 'Brontosaurus' was used for the stamp because it is more familiar to the general population.' Stephen Jay Gould also supported this position in his essay 'Bully for Brontosaurus,' though he echoed Riggs's original argument that 'Brontosaurus' is a synonym for 'Apatosaurus'. Nevertheless, he noted that the former has developed and continues to maintain an independent existence in the popular imagination.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1347'>
Allosaurus
Allosaurus is a genus of large theropod dinosaur that lived 155 to 150 million years ago during the late Jurassic period (Kimmeridgian to early Tithonian). The name 'Allosaurus' means 'different lizard', derived from the Greek 'allos' ('different, other') and 'sauros' ('lizard / generic reptile'). The first fossil remains that can definitively be ascribed to this genus were described in 1877 by paleontologist Othniel Charles Marsh; for much of the 20th century, however, the genus was known as 'Antrodemus'. As one of the first well-known theropod dinosaurs, it has long attracted attention outside of paleontological circles, and it has been a top feature in several films and documentaries about prehistoric life.
'Allosaurus' was a large bipedal predator. Its skull was large and equipped with dozens of large, sharp teeth. It averaged in length, though fragmentary remains suggest it could have reached over . Relative to the large and powerful hindlimbs, its three-fingered forelimbs were small, and the body was balanced by a long and heavily muscled tail. It is classified as an allosaurid, a type of carnosaurian theropod dinosaur. The genus has a complicated taxonomy, and includes an uncertain number of valid species, the best known of which is 'A. fragilis'. The bulk of 'Allosaurus' remains have come from North America's Morrison Formation, with material also known from Portugal and possibly Tanzania. It was known for over half of the 20th century as 'Antrodemus', but study of the copious remains from the Cleveland-Lloyd Dinosaur Quarry brought the name 'Allosaurus' back to prominence, and established it as one of the best-known dinosaurs.
As the most abundant large predator in the Morrison Formation, 'Allosaurus' was at the top of the food chain, probably preying on contemporaneous large herbivorous dinosaurs and perhaps even other predators. Potential prey included ornithopods, stegosaurids, and sauropods. Some paleontologists interpret 'Allosaurus' as having had cooperative social behavior, and hunting in packs, while others believe individuals may have been aggressive toward each other, and that congregations of this genus are the result of lone individuals feeding on the same carcasses. It may have attacked large prey by ambush, using its upper jaw like a hatchet.
Description.
'Allosaurus' was a typical large theropod, having a massive skull on a short neck, a long tail and reduced forelimbs. 'Allosaurus fragilis', the best-known species, had an average length of , with the largest definitive 'Allosaurus' specimen (AMNH 680) estimated at 9.7 meters long (32 ft), and an estimated weight of 2.3 metric tons (2.5 short tons). In his 1976 monograph on 'Allosaurus', James Madsen mentioned a range of bone sizes which he interpreted to show a maximum length of . As with dinosaurs in general, weight estimates are debatable, and since 1980 have ranged between , , and for modal adult weight (not maximum). John Foster, a specialist on the Morrison Formation, suggests that is reasonable for large adults of 'A. fragilis', but that is a closer estimate for individuals represented by the average-sized thigh bones he has measured. Using the subadult specimen nicknamed 'Big Al', researchers using computer modelling arrived at a best estimate of for the individual, but by varying parameters they found a range from approximately to approximately .
Several gigantic specimens have been attributed to 'Allosaurus', but may in fact belong to other genera. The closely related genus 'Saurophaganax' (OMNH 1708) reached perhaps in length, and its single species has sometimes been included in the genus 'Allosaurus' as 'Allosaurus maximus', though recent studies support it as a separate genus. Another potential specimen of 'Allosaurus', once assigned to the genus 'Epanterias' (AMNH 5767), may have measured 12.1 meters in length (40 ft). A more recent discovery is a partial skeleton from the Peterson Quarry in Morrison rocks of New Mexico; this large allosaurid may be another individual of 'Saurophaganax'.
Skull.
The skull and teeth of 'Allosaurus' were modestly proportioned for a theropod of its size. Paleontologist Gregory S. Paul gives a length of for a skull belonging to an individual he estimates at long. Each premaxilla (one of the bones that formed the tip of the snout) held five teeth with D-shaped cross-sections, and each maxilla (one of the main tooth-bearing bones in the upper jaw) had between 14 and 17 teeth; the number of teeth does not exactly correspond to the size of the bone. Each dentary (the tooth-bearing bone of the lower jaw) had between 14 and 17 teeth, with an average count of 16. The teeth became shorter, narrower, and more curved toward the back of the skull. All of the teeth had saw-like edges. They were shed easily and replaced continually, making them common fossils.
The skull had a pair of horns above and in front of the eyes. These horns were composed of extensions of the lacrimal bones, and varied in shape and size. There were also lower paired ridges running along the top edges of the nasal bones that led into the horns. The horns were probably covered in a keratin sheath and may have had a variety of functions, including acting as sunshades for the eye, being used for display, and being used in combat against other members of the same species (although they were fragile). There was a ridge along the back of the skull roof for muscle attachment, as is also seen in tyrannosaurids.
Inside the lacrimal bones were depressions that may have held glands, such as salt glands. Within the maxillae were sinuses that were better developed than those of more basal theropods such as 'Ceratosaurus' and 'Marshosaurus'; they may have been related to the sense of smell, perhaps holding something like Jacobson's organ. The roof of the braincase was thin, perhaps to improve thermoregulation for the brain. The skull and lower jaws had joints that permitted motion within these units. In the lower jaws, the bones of the front and back halves loosely articulated, permitting the jaws to bow outward and increasing the animal's gape. The braincase and frontals may also have had a joint.
Postcranial skeleton.
'Allosaurus' had nine vertebrae in the neck, 14 in the back, and five in the sacrum supporting the hips. The number of tail vertebrae is unknown and varied with individual size; James Madsen estimated about 50, while Gregory S. Paul considered that to be too many and suggested 45 or less. There were hollow spaces in the neck and anterior back vertebrae. Such spaces, which are also found in modern theropods (that is, the birds), are interpreted as having held air sacs used in respiration. The rib cage was broad, giving it a barrel chest, especially in comparison to less derived theropods like 'Ceratosaurus'. 'Allosaurus' had gastralia (belly ribs), but these are not common findings, and they may have ossified poorly. In one published case, the gastralia show evidence of injury during life. A furcula (wishbone) was also present, but has only been recognized since 1996; in some cases furculae were confused with gastralia. The ilium, the main hip bone, was massive, and the pubic bone had a prominent foot that may have been used for both muscle attachment and as a prop for resting the body on the ground. Madsen noted that in about half of the individuals from the Cleveland-Lloyd Dinosaur Quarry, independent of size, the pubes had not fused to each other at their foot ends. He suggested that this was a sexual characteristic, with females lacking fused bones to make egg-laying easier. This proposal has not attracted further attention, however.
The forelimbs of 'Allosaurus' were short in comparison to the hindlimbs (only about 35% the length of the hindlimbs in adults) and had three fingers per hand, tipped with large, strongly curved and pointed claws. The arms were powerful, and the forearm was somewhat shorter than the upper arm (1:1.2 ulna/humerus ratio). The wrist had a version of the semilunate carpal also found in more derived theropods like maniraptorans. Of the three fingers, the innermost (or thumb) was the largest, and diverged from the others. The phalangeal formula is 2-3-4-0-0, meaning that the innermost digit has two phalanges (finger bones), the next has three, and the third has four. The legs were not as long or suited for speed as those of tyrannosaurids, and the claws of the toes were less developed and more hoof-like than those of earlier theropods. Each foot had three weight-bearing toes and an inner dewclaw, which Madsen suggested could have been used for grasping in juveniles. There was also what is interpreted as the splint-like remnant of a fifth (outermost) metatarsal, perhaps used as a lever between the Achilles tendon and foot.
Classification.
'Allosaurus' was an allosaurid, a member of a family of large theropods within the larger group Carnosauria. The family name Allosauridae was created for this genus in 1878 by Othniel Charles Marsh, but the term was largely unused until the 1970s in favor of Megalosauridae, another family of large theropods that eventually became a wastebasket taxon. This, along with the use of 'Antrodemus' for 'Allosaurus' during the same period, is a point that needs to be remembered when searching for information on 'Allosaurus' in publications that predate James Madsen's 1976 monograph. Major publications using the name 'Megalosauridae' instead of 'Allosauridae' include Gilmore, 1920, von Huene, 1926, Romer, 1956 and 1966, Steel, 1970, and Walker, 1964.
Following the publication of Madsen's influential monograph, Allosauridae became the preferred family assignment, but it too was not strongly defined. Semi-technical works used Allosauridae for a variety of large theropods, usually those that were larger and better-known than megalosaurids. Typical theropods that were thought to be related to 'Allosaurus' included 'Indosaurus', 'Piatnitzkysaurus', 'Piveteausaurus', 'Yangchuanosaurus', 'Acrocanthosaurus', 'Chilantaisaurus', 'Compsosuchus', 'Stokesosaurus', and 'Szechuanosaurus'. Given modern knowledge of theropod diversity and the advent of cladistic study of evolutionary relationships, none of these theropods is now recognized as an allosaurid, although several, like 'Acrocanthosaurus' and 'Yangchuanosaurus', are members of closely related families.
Below is a cladogram by Benson 'et al.' in 2010.
Allosauridae is one of four families in Carnosauria; the other three are Neovenatoridae, Carcharodontosauridae and Sinraptoridae. Allosauridae has at times been proposed as ancestral to the Tyrannosauridae (which would make it paraphyletic), one recent example being Gregory S. Paul's 'Predatory Dinosaurs of the World', but this has been rejected, with tyrannosaurids identified as members of a separate branch of theropods, the Coelurosauria. Allosauridae is the smallest of the carnosaur families, with only 'Saurophaganax' and a currently unnamed French allosauroid accepted as possible valid genera besides 'Allosaurus' in the most recent review. Another genus, 'Epanterias', is a potential valid member, but it and 'Saurophaganax' may turn out to be large examples of 'Allosaurus'. Recent reviews have kept the genus 'Saurophaganax' and included 'Epanterias' with 'Allosaurus'.
Discovery and history.
Early discoveries and research.
The discovery and early study of 'Allosaurus' is complicated by the multiplicity of names coined during the Bone Wars of the late 19th century. The first described fossil in this history was a bone obtained secondhand by Ferdinand Vandiveer Hayden in 1869. It came from Middle Park, near Granby, Colorado, probably from Morrison Formation rocks. The locals had identified such bones as 'petrified horse hoofs'. Hayden sent his specimen to Joseph Leidy, who identified it as half of a tail vertebra, and tentatively assigned it to the European dinosaur genus 'Poekilopleuron' as 'Poicilopleuron' 'valens'. He later decided it deserved its own genus, 'Antrodemus'.
'Allosaurus' itself is based on YPM 1930, a small collection of fragmentary bones including parts of three vertebrae, a rib fragment, a tooth, a toe bone, and, most useful for later discussions, the shaft of the right humerus (upper arm). Othniel Charles Marsh gave these remains the formal name 'Allosaurus fragilis' in 1877. 'Allosaurus' comes from the Greek 'allos/αλλος', meaning 'strange' or 'different' and 'sauros/σαυρος', meaning 'lizard' or 'reptile'. It was named 'different lizard' because its vertebrae were different from those of other dinosaurs known at the time of its discovery. The species epithet 'fragilis' is Latin for 'fragile', referring to lightening features in the vertebrae. The bones were collected from the Morrison Formation of Garden Park, north of Cañon City. Marsh and Edward Drinker Cope, who were in scientific competition, went on to coin several other genera based on similarly sparse material that would later figure in the taxonomy of 'Allosaurus'. These include Marsh's 'Creosaurus' and 'Labrosaurus', and Cope's 'Epanterias'.
In their haste, Cope and Marsh did not always follow up on their discoveries (or, more commonly, those made by their subordinates). For example, after the discovery by Benjamin Mudge of the type specimen of 'Allosaurus' in Colorado, Marsh elected to concentrate work in Wyoming; when work resumed at Garden Park in 1883, M. P. Felch found an almost complete 'Allosaurus' and several partial skeletons. In addition, one of Cope's collectors, H. F. Hubbell, found a specimen in the Como Bluff area of Wyoming in 1879, but apparently did not mention its completeness, and Cope never unpacked it. Upon unpacking in 1903 (several years after Cope had died), it was found to be one of the most complete theropod specimens then known, and in 1908 the skeleton, now cataloged as AMNH 5753, was put on public view. This is the well-known mount poised over a partial 'Apatosaurus' skeleton as if scavenging it, illustrated as such by Charles R. Knight. Although notable as the first free-standing mount of a theropod dinosaur, and often illustrated and photographed, it has never been scientifically described.
The multiplicity of early names complicated later research, with the situation compounded by the terse descriptions provided by Marsh and Cope. Even at the time, authors such as Samuel Wendell Williston suggested that too many names had been coined. For example, Williston pointed out in 1901 that Marsh had never been able to adequately distinguish 'Allosaurus' from 'Creosaurus'. The most influential early attempt to sort out the convoluted situation was produced by Charles W. Gilmore in 1920. He came to the conclusion that the tail vertebra named 'Antrodemus' by Leidy was indistinguishable from those of 'Allosaurus', and 'Antrodemus' thus should be the preferred name because as the older name it had priority. 'Antrodemus' became the accepted name for this familiar genus for over fifty years, until James Madsen published on the Cleveland-Lloyd specimens and concluded that 'Allosaurus' should be used because 'Antrodemus' was based on material with poor, if any, diagnostic features and locality information (for example, the geological formation that the single bone of 'Antrodemus' came from is unknown). 'Antrodemus' has been used informally for convenience when distinguishing between the skull Gilmore restored and the composite skull restored by Madsen.
Cleveland-Lloyd discoveries.
Although sporadic work at what became known as the Cleveland-Lloyd Dinosaur Quarry in Emery County, Utah had taken place as early as 1927, and the fossil site itself was described by William J. Stokes in 1945, major operations did not begin there until 1960. Under a cooperative effort involving nearly 40 institutions, thousands of bones were recovered between 1960 and 1965. The quarry is notable for the predominance of 'Allosaurus' remains, the condition of the specimens, and the lack of scientific resolution on how it came to be. The majority of bones belong to the large theropod 'Allosaurus fragilis' (it is estimated that the remains of at least 46 'A. fragilis' have been found there, out of a minimum of 73 dinosaurs), and the fossils found there are disarticulated and well-mixed. Nearly a dozen scientific papers have been written on the taphonomy of the site, suggesting numerous mutually exclusive explanations for how it may have formed. Suggestions have ranged from animals getting stuck in a bog, to becoming trapped in deep mud, to falling victim to drought-induced mortality around a waterhole, to getting trapped in a spring-fed pond or seep. Regardless of the actual cause, the great quantity of well-preserved 'Allosaurus' remains has allowed this genus to be known in detail, making it among the best-known theropods. Skeletal remains from the quarry pertain to individuals of almost all ages and sizes, from less than to long, and the disarticulation is an advantage for describing bones usually found fused.
Recent work: 1980s–present.
The period since Madsen's monograph has been marked by a great expansion in studies dealing with topics concerning 'Allosaurus' in life (paleobiological and paleoecological topics). Such studies have covered topics including skeletal variation, growth, skull construction, hunting methods, the brain, and the possibility of gregarious living and parental care. Reanalysis of old material (particularly of large 'allosaur' specimens), new discoveries in Portugal, and several very complete new specimens have also contributed to the growing knowledge base.
'Big Al' and 'Big Al Two'.
One of the more significant 'Allosaurus' finds was the 1991 discovery of 'Big Al' (MOR 693), a 95% complete, partially articulated specimen that measured about 8 meters (about 26 ft) in length. The skeleton was discovered by a Swiss team led by Kirby Siber, and MOR 693 was excavated near Shell, Wyoming, by a joint Museum of the Rockies and University of Wyoming Geological Museum team. In 1996 the Siber team discovered a second 'Allosaurus', 'Big Al Two', which is the best-preserved skeleton of its kind to date.
The completeness, preservation, and scientific importance of this skeleton gave 'Big Al' its name; the individual itself was below the average size for 'Allosaurus fragilis', and was a subadult estimated at only 87% grown. The specimen was described by Breithaupt in 1996. Nineteen of its bones were broken or showed signs of infection, which may have contributed to 'Big Al's' death. Pathologic bones included five ribs, five vertebrae, and four bones of the feet; several damaged bones showed osteomyelitis, a bone infection. A particular problem for the living animal was infection and trauma to the right foot that probably affected movement and may have also predisposed the other foot to injury because of a change in gait. The first phalanx of the third toe was afflicted by an involucrum (a sheath of new bone growth around infected bone). The infection was long-lived, perhaps lasting up to six months.
Species and taxonomy.
It is unclear how many species of 'Allosaurus' there were. Seven species have been considered potentially valid since 1988 ('A. amplexus', 'A. atrox', 'A. europaeus', the type species 'A. fragilis', the not yet formally described 'A. jimmadseni', 'A. maximus', and 'A. tendagurensis'), although only a fraction are usually considered valid at any given time. Additionally, there are at least ten dubious or undescribed species that have been assigned to 'Allosaurus' over the years, along with the species belonging to genera now sunk into 'Allosaurus'. In a recent review of basal tetanuran theropods, only 'A. fragilis' (including 'A. amplexus' and 'A. atrox' as synonyms), 'A. jimmadseni' (as an unnamed species), and 'A. tendagurensis' were accepted as potentially valid species, with 'A. europaeus' not yet proposed and 'A. maximus' assigned to 'Saurophaganax'.
'A. amplexus', 'A. atrox', 'A. fragilis', 'A. jimmadseni', and 'A. maximus' are all known from remains discovered in the Kimmeridgian–Tithonian Upper Jurassic-age Morrison Formation of the United States, spread across the states of Colorado, Montana, New Mexico, Oklahoma, South Dakota, Utah, and Wyoming. 'A. fragilis' is regarded as the most common, known from the remains of at least sixty individuals. Debate has gone on since the 1980s regarding the possibility that there are two common Morrison Formation species of 'Allosaurus', with the second known as 'A. atrox'; recent work has followed a 'one species' interpretation, with the differences seen in the Morrison Formation material attributed to individual variation. A study of skull elements from the Cleveland-Lloyd site found wide variation between individuals, calling into question previous species-level distinctions based on such features as the shape of the lacrimal horns, and the proposed differentiation of 'A. jimmadseni' based on the shape of the jugal. 'A. europaeus' was found in the Kimmeridgian-age Porto Novo Member of the Lourinhã Formation, but may be the same as 'A. fragilis'. 'A. tendagurensis' was found in Kimmeridgian-age rocks of Tendaguru, in Mtwara, Tanzania. It may be a more basal tetanuran, a carcharodontosaurid, or simply a dubious theropod. Although obscure, it was a large theropod, possibly around 10 meters long (33 ft) and 2.5 metric tons (2.8 short tons) in weight.
'Allosaurus' is regarded as a probable synonym of the genera 'Antrodemus', 'Creosaurus', 'Epanterias', and 'Labrosaurus'. Most of the species that are regarded as synonyms of 'A. fragilis', or that were misassigned to the genus, are obscure and were based on scrappy remains. One exception is 'Labrosaurus ferox', named in 1884 by Marsh for an oddly formed partial lower jaw, with a prominent gap in the tooth row at the tip of the jaw, and a rear section greatly expanded and turned down. Later researchers suggested that the bone was pathologic, showing an injury to the living animal, and that part of the unusual form of the rear of the bone was due to plaster reconstruction. It is now regarded as an example of 'A. fragilis.' Other remains thought to pertain to 'Allosaurus' have come from across the world, including Australia, Siberia, and Switzerland, but these fossils have been reassessed as belonging to other dinosaurs.
The issue of synonyms is complicated by the type specimen of 'Allosaurus fragilis' (catalog number YPM 1930) being extremely fragmentary, consisting of a few incomplete vertebrae, limb bone fragments, rib fragments, and a tooth. Because of this, several scientists have interpreted the type specimen as potentially dubious, and thus the genus 'Allosaurus' itself or at least the species 'A. fragilis' would be a 'nomen dubium' ('dubious name', based on a specimen too incomplete to compare to other specimens or to classify). To address this situation, Gregory S. Paul and Kenneth Carpenter (2010) submitted a petition to the ICZN to have the name 'A. fragilis' officially transferred to the more complete specimen USNM4734 (as a neotype). This request is currently pending review.
Paleoecology.
'Allosaurus' was the most common large theropod in the vast tract of Western American fossil-bearing rock known as the Morrison Formation, accounting for 70 to 75% of theropod specimens, and as such was at the top trophic level of the Morrison food web. The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons, and flat floodplains. Vegetation varied from river-lining forests of conifers, tree ferns, and ferns (gallery forests), to fern savannas with occasional trees such as the 'Araucaria'-like conifer 'Brachyphyllum'.
The Morrison Formation has been a rich fossil hunting ground. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, ferns, cycads, ginkgoes, and several families of conifers. Animal fossils discovered include bivalves, snails, ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphans, several species of pterosaur, numerous dinosaur species, and early mammals such as docodonts, multituberculates, symmetrodonts, and triconodonts. Dinosaurs known from the Morrison include the theropods 'Ceratosaurus', 'Ornitholestes', and 'Torvosaurus', the sauropods 'Apatosaurus', 'Brachiosaurus', 'Camarasaurus', and 'Diplodocus', and the ornithischians 'Camptosaurus', 'Dryosaurus', and 'Stegosaurus'. 'Allosaurus' is commonly found at the same sites as 'Apatosaurus', 'Camarasaurus', 'Diplodocus', and 'Stegosaurus'. The Late Jurassic formations of Portugal where 'Allosaurus' is present are interpreted as having been similar to the Morrison but with a stronger marine influence. Many of the dinosaurs of the Morrison Formation are the same genera as those seen in Portuguese rocks (mainly 'Allosaurus', 'Ceratosaurus', 'Torvosaurus', and 'Apatosaurus'), or have a close counterpart ('Brachiosaurus' and 'Lusotitan', 'Camptosaurus' and 'Draconyx').
'Allosaurus' coexisted with fellow large theropods 'Ceratosaurus' and 'Torvosaurus' in both the United States and Portugal. The three appear to have had different ecological niches, based on anatomy and the location of fossils. Ceratosaurs and torvosaurs may have preferred to be active around waterways, and had lower, thinner bodies that would have given them an advantage in forest and underbrush terrains, whereas allosaurs were more compact, with longer legs, faster but less maneuverable, and seem to have preferred dry floodplains. 'Ceratosaurus', better known than 'Torvosaurus', differed noticeably from 'Allosaurus' in functional anatomy by having a taller, narrower skull with large, broad teeth. 'Allosaurus' was itself a potential food item to other carnivores, as illustrated by an 'Allosaurus' pubic foot marked by the teeth of another theropod, probably 'Ceratosaurus' or 'Torvosaurus'. The location of the bone in the body (along the bottom margin of the torso and partially shielded by the legs), and the fact that it was among the most massive in the skeleton, indicates that the 'Allosaurus' was being scavenged.
Paleobiology.
Life history.
The wealth of 'Allosaurus' fossils, from nearly all ages of individuals, allows scientists to study how the animal grew and how long its lifespan may have been. Remains may reach as far back in the lifespan as eggs—crushed eggs from Colorado have been suggested as those of 'Allosaurus'. Based on histological analysis of limb bones, bone deposition appears to stop at around 22 to 28 years, which is comparable to that of other large theropods like 'Tyrannosaurus'. From the same analysis, its maximum growth appears to have been at age 15, with an estimated growth rate of about 150 kilograms (330 lb) per year.
Medullary bone tissue (endosteally derived, ephemeral, mineralization located inside the medulla of the long bones in gravid female birds) has been reported in at least one 'Allosaurus' specimen, a shin bone from the Cleveland-Lloyd Quarry. Today, this bone tissue is only formed in female birds that are laying eggs, as it is used to supply calcium to shells. Its presence in the 'Allosaurus' individual has been used to establish sex and show it had reached reproductive age. However, other studies have called into question some cases of medullary bone in dinosaurs, including this 'Allosaurus' individual. Data from extant birds suggested that the medullary bone in this 'Allosaurus' individual may have been the result of a bone pathology instead.
The discovery of a juvenile specimen with a nearly complete hindlimb shows that the legs were relatively longer in juveniles, and the lower segments of the leg (shin and foot) were relatively longer than the thigh. These differences suggest that younger 'Allosaurus' were faster and had different hunting strategies than adults, perhaps chasing small prey as juveniles, then becoming ambush hunters of large prey upon adulthood. The thigh bone became thicker and wider during growth, and the cross-section less circular, as muscle attachments shifted, muscles became shorter, and the growth of the leg slowed. These changes imply that juvenile legs had less predictable stresses than those of adults, which would have moved with more regular forward progression. Conversely, the skull bones appear to have generally grown isometrically, increasing in size without changing in proportion.
Feeding.
Paleontologists accept 'Allosaurus' as an active predator of large animals. There is dramatic evidence for allosaur attacks on 'Stegosaurus', including an 'Allosaurus' tail vertebra with a partially healed puncture wound that fits a 'Stegosaurus' tail spike, and a 'Stegosaurus' neck plate with a U-shaped wound that correlates well with an 'Allosaurus' snout. Sauropods seem to be likely candidates as both live prey and as objects of scavenging, based on the presence of scrapings on sauropod bones fitting allosaur teeth well and the presence of shed allosaur teeth with sauropod bones. However, as Gregory Paul noted in 1988, 'Allosaurus' was probably not a predator of fully grown sauropods, unless it hunted in packs, as it had a modestly sized skull and relatively small teeth, and was greatly outweighed by contemporaneous sauropods. Another possibility is that it preferred to hunt juveniles instead of fully grown adults. Research in the 1990s and first decade of the 21st century may have found other solutions to this question. Robert T. Bakker, comparing 'Allosaurus' to Cenozoic sabre-toothed carnivorous mammals, found similar adaptations, such as a reduction of jaw muscles and increase in neck muscles, and the ability to open the jaws extremely wide. Although 'Allosaurus' did not have sabre teeth, Bakker suggested another mode of attack that would have used such neck and jaw adaptations: the short teeth in effect became small serrations on a saw-like cutting edge running the length of the upper jaw, which would have been driven into prey. This type of jaw would permit slashing attacks against much larger prey, with the goal of weakening the victim.
Similar conclusions were drawn by another study using finite element analysis on an 'Allosaurus' skull. According to their biomechanical analysis, the skull was very strong but had a relatively small bite force. By using jaw muscles only, it could produce a bite force of 805 to 2,148 N, less than the values for alligators (13,000 N), lions (4,167 N), and leopards (2,268 N), but the skull could withstand nearly 55,500 N of vertical force against the tooth row. The authors suggested that 'Allosaurus' used its skull like a hatchet against prey, attacking open-mouthed, slashing flesh with its teeth, and tearing it away without splintering bones, unlike 'Tyrannosaurus', which is thought to have been capable of damaging bones. They also suggested that the architecture of the skull could have permitted the use of different strategies against different prey; the skull was light enough to allow attacks on smaller and more agile ornithopods, but strong enough for high-impact ambush attacks against larger prey like stegosaurids and sauropods. Their interpretations were challenged by other researchers, who found no modern analogues to a hatchet attack and considered it more likely that the skull was strong to compensate for its open construction when absorbing the stresses from struggling prey. The original authors noted that 'Allosaurus' itself has no modern equivalent, that the tooth row is well-suited to such an attack, and that articulations in the skull cited by their detractors as problematic actually helped protect the palate and lessen stress. Another possibility for handling large prey is that theropods like 'Allosaurus' were 'flesh grazers' which could take bites of flesh out of living sauropods that were sufficient to sustain the predator so it would not have needed to expend the effort to kill the prey outright. This strategy would also potentially have allowed the prey to recover and be fed upon in a similar way later. 
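The force figures above invite a quick side-by-side comparison. The sketch below is plain arithmetic on the values reported in the finite element study (no values beyond those quoted are assumed), illustrating how large the skull's vertical load capacity was relative to the animal's own bite:

```python
# Bite-force and skull-strength figures from the finite element study (newtons).
bite_force_allosaurus = (805, 2148)   # estimated range, jaw muscles only
bite_force_leopard = 2268
bite_force_lion = 4167
bite_force_alligator = 13000
skull_vertical_capacity = 55500       # vertical force the tooth row could withstand

# Even at the top of its estimated range, the Allosaurus bite was weaker than
# a leopard's, and far weaker than an alligator's.
assert bite_force_allosaurus[1] < bite_force_leopard < bite_force_lion < bite_force_alligator

# Safety factor: the skull could resist roughly 26 times the strongest bite the
# jaw muscles could deliver, consistent with the authors' suggestion that the
# main loads came from a driving, hatchet-like strike rather than the bite itself.
safety_factor = skull_vertical_capacity / bite_force_allosaurus[1]
print(round(safety_factor, 1))  # ~25.8
```

This ratio is a back-of-the-envelope check, not part of the original analysis, but it makes the mismatch between bite force and skull strength concrete.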
An additional suggestion notes that ornithopods were the most common available dinosaurian prey, and that allosaurs may have subdued them by using an attack similar to that of modern big cats: grasping the prey with their forelimbs, and then making multiple bites on the throat to crush the trachea. This is compatible with other evidence that the forelimbs were strong and capable of restraining prey.
A biomechanical study published in 2013 by Eric Snively and colleagues found that 'Allosaurus' had an unusually low attachment point on the skull for the longissimus capitis superficialis neck muscle compared to other theropods such as 'Tyrannosaurus'. This would have allowed the animal to make rapid and forceful vertical movements with the skull. The authors found that vertical strikes as proposed by Bakker and Rayfield are consistent with the animal's capabilities. They also found that the animal probably processed carcasses by vertical movements in a similar manner to falcons, such as kestrels: the animal could have gripped prey with the skull and feet, then pulled back and up to remove flesh. This differs from the prey-handling envisioned for tyrannosaurids, which probably tore flesh with lateral shakes of the skull, similar to crocodilians. In addition, 'Allosaurus' was able to 'move its head and neck around relatively rapidly and with considerable control', at the cost of power.
Other aspects of feeding include the eyes, arms, and legs. The shape of the skull of 'Allosaurus' limited potential binocular vision to 20° of width, slightly less than that of modern crocodilians. As with crocodilians, this may have been enough to judge prey distance and time attacks. The arms, compared with those of other theropods, were suited for both grasping prey at a distance and clutching it close, and the articulation of the claws suggests that they could have been used to hook things. Finally, the top speed of 'Allosaurus' has been estimated at 30 to 55 kilometers per hour (19 to 34 miles per hour).
Social behavior.
It has been speculated since the 1970s that 'Allosaurus' preyed on sauropods and other large dinosaurs by hunting in groups.
Such a depiction is common in semitechnical and popular dinosaur literature. Robert T. Bakker has extended social behavior to parental care, and has interpreted shed allosaur teeth and chewed bones of large prey animals as evidence that adult allosaurs brought food to lairs for their young to eat until they were grown, and prevented other carnivores from scavenging on the food. However, there is actually little evidence of gregarious behavior in theropods, and social interactions with members of the same species would have included antagonistic encounters, as shown by injuries to gastralia and bite wounds to skulls (the pathologic lower jaw named 'Labrosaurus ferox' is one such possible example). Such head-biting may have been a way to establish dominance in a pack or to settle territorial disputes.
Although 'Allosaurus' may have hunted in packs, it has been argued that 'Allosaurus' and other theropods had largely aggressive interactions instead of cooperative interactions with other members of their own species. The study in question noted that cooperative hunting of prey much larger than an individual predator, as is commonly inferred for theropod dinosaurs, is rare among vertebrates in general, and modern diapsid carnivores (including lizards, crocodiles, and birds) very rarely cooperate to hunt in such a way. Instead, they are typically territorial and will kill and cannibalize intruders of the same species, and will also do the same to smaller individuals that attempt to eat before they do when aggregated at feeding sites. According to this interpretation, the accumulation of remains of multiple 'Allosaurus' individuals at the same site, e.g. in the Cleveland–Lloyd Quarry, is not due to pack hunting, but to the fact that 'Allosaurus' individuals were drawn together to feed on other disabled or dead allosaurs, and were sometimes killed in the process. This could explain the high proportion of juvenile and subadult allosaurs present, as juveniles and subadults are disproportionately killed at modern group feeding sites of animals like crocodiles and Komodo dragons. The same interpretation applies to Bakker's lair sites. There is some evidence for cannibalism in 'Allosaurus', including 'Allosaurus' shed teeth found among rib fragments, possible tooth marks on a shoulder blade, and cannibalized allosaur skeletons among the bones at Bakker's lair sites.
Brain and senses.
The brain of 'Allosaurus', as interpreted from spiral CT scanning of an endocast, was more consistent with crocodilian brains than those of the other living archosaurs, birds. The structure of the vestibular apparatus indicates that the skull was held nearly horizontal, as opposed to strongly tipped up or down. The structure of the inner ear was like that of a crocodilian, and so 'Allosaurus' probably could have heard lower frequencies best, and would have had trouble with subtle sounds. The olfactory bulbs were large and seem to have been well suited for detecting odors, although the area for evaluating smells was relatively small.
Paleopathology.
In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. Since stress fractures are caused by repeated trauma rather than singular events, they are more likely to be caused by the behavior of the animal than other kinds of injury. Stress fractures and tendon avulsions occurring in the forelimb have special behavioral significance since, while injuries to the feet could be caused by running or migration, resistant prey items are the most probable source of injuries to the hand. 'Allosaurus' was one of only two theropods examined in the study to exhibit a tendon avulsion, and in both cases the avulsion occurred on the forelimb. When the researchers looked for stress fractures, they found that 'Allosaurus' had a significantly greater number of stress fractures than 'Albertosaurus', 'Ornithomimus' or 'Archaeornithomimus'. Of the 47 hand bones the researchers studied, 3 were found to contain stress fractures. Of the feet, 281 bones were studied and 17 were found to have stress fractures. The stress fractures in the foot bones 'were distributed to the proximal phalanges' and occurred across all three weight-bearing toes in 'statistically indistinguishable' numbers. Since the lower end of the third metatarsal would have contacted the ground first while an allosaur was running, it would have borne the most stress. If the allosaurs' stress fractures were caused by damage accumulating while walking or running, this bone should have experienced more stress fractures than the others. The lack of such a bias in the examined 'Allosaurus' fossils indicates an origin for the stress fractures from a source other than running. The authors conclude that these fractures occurred during interaction with prey, like an allosaur trying to hold struggling prey with its feet.
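The counts above can be turned into per-bone rates with a line of arithmetic. The snippet below is a quick check using only the counts reported in the study, not part of the original analysis; it shows that hand and foot bones fractured at nearly the same rate, consistent with the hands being stressed about as often as the weight-bearing feet:

```python
# Stress-fracture counts for Allosaurus from the Rothschild et al. (2001) survey.
hand_fractured, hand_total = 3, 47
foot_fractured, foot_total = 17, 281

hand_rate = hand_fractured / hand_total   # ~0.064
foot_rate = foot_fractured / foot_total   # ~0.060

# The per-bone fracture rates are nearly identical, so the hands, which play
# no role in locomotion, were injured about as often as the feet -- fitting
# the authors' conclusion that struggling prey, not running, caused the damage.
print(f"hand: {hand_rate:.1%}, foot: {foot_rate:.1%}")  # hand: 6.4%, foot: 6.0%
```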
The abundance of stress fractures and avulsion injuries in 'Allosaurus' provides evidence for a 'very active', predation-based diet rather than scavenging.
The left scapula and fibula of an 'Allosaurus fragilis' specimen catalogued as USNM 4734 are both pathological, probably due to healed fractures. The specimen USNM 8367 preserved several pathological gastralia which show evidence of healed fractures near their middle. Some of the fractures were poorly healed and 'formed pseudoarthroses'. A specimen with a fractured rib was recovered from the Cleveland-Lloyd Quarry. Another specimen had fractured ribs and fused vertebrae near the end of the tail. An apparent subadult male 'Allosaurus fragilis' was reported to have extensive pathologies, with a total of fourteen separate injuries. The specimen MOR 693 had pathologies on five ribs, the sixth neck vertebra, the third, eighth, and thirteenth back vertebrae, the second tail vertebra and its chevron, the gastralia, the right scapula, manual phalanx I, the left ilium, metatarsals III and V, the first phalanx of the third toe, and the third phalanx of the second. The ilium had 'a large hole... caused by a blow from above'. The near end of the first phalanx of the third toe was afflicted by an involucrum.
Other pathologies reported in 'Allosaurus' include:
In popular culture.
Along with 'Tyrannosaurus', 'Allosaurus' has come to represent the quintessential large, carnivorous dinosaur in popular culture. It is a common dinosaur in museums, due in particular to the excavations at the Cleveland-Lloyd Dinosaur Quarry; by 1976, as a result of cooperative operations, 38 museums in eight countries on three continents had Cleveland-Lloyd allosaur material or casts. 'Allosaurus' is the official state fossil of Utah.
'Allosaurus' has been depicted in popular culture since the early years of the 20th century. It is the top predator in both Arthur Conan Doyle's 1912 novel, 'The Lost World', and its 1925 film adaptation, the first full-length motion picture to feature dinosaurs. 'Allosaurus' was used as the starring dinosaur of the 1956 film 'The Beast of Hollow Mountain', and the 1969 film 'The Valley of Gwangi', two genre combinations of living dinosaurs with Westerns. In 'The Valley of Gwangi', Gwangi is billed as an 'Allosaurus', although Ray Harryhausen based his model for the creature on Charles R. Knight's depiction of a 'Tyrannosaurus'. Harryhausen sometimes confuses the two, stating in a DVD interview 'They're both meat eaters, they're both tyrants... one was just a bit larger than the other.' 'Allosaurus' appeared in the second episode of the 1999 BBC television series 'Walking with Dinosaurs' and the follow-up special 'The Ballad of Big Al', which speculated on the life of the 'Big Al' specimen, based on scientific evidence from the numerous injuries and pathologies in its skeleton. 'Allosaurus' also made an appearance in the Discovery Channel series 'Dinosaur Revolution'. Its depiction in this series was based upon a specimen with a smashed lower jaw that was uncovered by paleontologist Thomas Holtz.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1348'>
AK-47
The AK-47 is a selective-fire, gas-operated 7.62×39mm assault rifle, first developed in the Soviet Union by Mikhail Kalashnikov. It is officially known in Soviet documentation as 'Avtomat Kalashnikova'. It is also known as the Kalashnikov, AK, or, in Russian slang, Kalash.
Design work on the AK-47 began in the last year of World War II (1945). After the war in 1946, the AK-47 was presented for official military trials. In 1948, the fixed-stock version was introduced into active service with selected units of the Soviet Army. An early development of the design was the AKS (S—'Skladnoy' or 'folding'), which was equipped with an underfolding metal shoulder stock. In 1949, the AK-47 was officially accepted by the Soviet Armed Forces and used by the majority of the member states of the Warsaw Pact.
Even after six decades, the model and its variants remain the most popular and widely used assault rifles in the world because of their substantial reliability even under harsh conditions, low production costs compared to contemporary Western weapons, availability in virtually every geographic region, and ease of use. The AK-47 has been manufactured in many countries and has seen service with armed forces as well as irregular forces worldwide, and was the basis for the development of many other types of individual and crew-served firearms. More AK-type rifles have been produced than all other assault rifles combined.
History.
Pre-history.
During World War II, the Germans introduced the StG 44 ('Sturmgewehr,' literally 'Storm rifle') in large numbers—about half a million were built. This gun, from which the English terminology 'assault rifle' originates, was chambered in a new intermediate cartridge, the 7.92×33mm Kurz. The Soviets captured an early prototype of the StG 44, a Mkb 42(H), and they were also given samples of the U.S. M1 Carbine, which was also developed for a less powerful round. Based on these developments, on 15 July 1943, the People's Commissariat for Armaments decided to introduce a Soviet intermediate cartridge. A team led by N.M. Elizarov (Н.М. Елизаров) was charged with the development of what eventually became the 7.62×39mm M43; the new cartridge went into mass production in March 1944. At the same meeting that adopted the new cartridge, the Soviet planners decided that a whole range of new small arms should use it, including a semi-automatic carbine, a fully automatic rifle, and a light machine gun. Design contests for these new weapons began in earnest in 1944.
Development and competition.
Mikhail Kalashnikov began his career as a weapon designer while in a hospital after he was shot in the shoulder during the Battle of Bryansk. After tinkering with a submachine gun design in 1942 and with a light machine gun in 1943, in 1944 he entered a competition for a new weapon that would chamber the 7.62×41mm cartridge developed by Yelizarov and Syomin in 1943 (the 7.62×41mm cartridge predated the current 7.62×39mm M1943). In the 1944 competition for intermediate cartridge weapons, Kalashnikov submitted a semi-automatic, gas-operated carbine, strongly influenced by the American M1 Garand, but that lost out to a Simonov design, which was adopted as the SKS-45.
In the fully automatic weapon category, the specifications (тактико-технические требования - TTT) number 2456-43 passed down by the GAU in November 1943 were rather ambitious: the weapon was to have a 500–520 mm long barrel and had to weigh no more than 5 kg, including a folding bipod. Despite this, many Soviet designers participated in this category; Tokarev, Korovin, Degtyarev, Shpagin, Simonov, and Prilutsky are some of the more prominent names who submitted designs. Kalashnikov did not submit an entry for this contest. A gun presented by Sudayev, the AS-44 (weight: 5.6 kg, barrel length 505 mm), came up ahead in the mid-1944 trials.
However, subsequent field trials conducted in 1945 found it to be too heavy for the average soldier, and Sudayev was asked to lighten his gun; his lightened variant (5.35 kg, 485 mm barrel) turned out to be less reliable and less accurate. In October 1945, the GAU was convinced to dispense with the built-in bipod requirement; Sudayev's gun in this variant, called OAS (облегченный автомат Судаева - ОАС), weighed only 4.8 kg. Sudayev however fell ill and died in 1946, preventing further development.
The experience gained from the reliability issues of the lightened Sudayev design convinced the GAU that a brand-new competition had to be held, and for this round the requirements were explicitly stated: the aim was a wholesale replacement of the PPSh-41 and PPS-43 sub-machine guns. The new competition was initiated in 1946 under GAU TTT number 3131-45. Ten designs had been submitted by August 1946.
Kalashnikov and his design team from factory number two in Kovrov submitted an entry. It was a gas-operated rifle which had a breech-block mechanism similar to his 1944 carbine, and a curved 30-round magazine. Kalashnikov's rifles (codenamed AK-1 and −2, the former with a milled receiver and the latter with a stamped one) proved to be reliable and the weapon was accepted into the second round of competition along with designs by A. A. Dementyev (KB-P-520) and A. A. Bulkin (TKB-415). In late 1946, as the rifles were being tested, one of Kalashnikov's assistants, Aleksandr Zaitsev, suggested a major redesign of AK-1, particularly to improve reliability. At first, Kalashnikov was reluctant, given that their rifle had already fared better than its competitors. Eventually, however, Zaitsev managed to persuade Kalashnikov. The new rifle (factory name KB-P-580) proved to be simple and reliable under a wide range of conditions with convenient handling characteristics; prototypes with serial numbers one to three were completed in November 1947. Production of the first army trial series began in early 1948 at the Izhevsk factory number 524, and in 1949 it was adopted by the Soviet Army as the '7.62 mm Kalashnikov assault rifle (AK)'.
Design.
The AK-47 is best described as a hybrid of previous rifle technology innovations: the trigger mechanism, double locking lugs and unlocking raceway of the M1 Garand/M1 carbine, the safety mechanism of the John Browning designed Remington Model 8 rifle, and the gas system of the Sturmgewehr 44.
Kalashnikov's team had access to all of these weapons and had no need to 'reinvent the wheel', though he denied that his design was based on the German Sturmgewehr 44 assault rifle. Kalashnikov himself observed: 'A lot of Russian Army soldiers ask me how one can become a constructor, and how new weaponry is designed. These are very difficult questions. Each designer seems to have his own paths, his own successes and failures. But one thing is clear: before attempting to create something new, it is vital to have a good appreciation of everything that already exists in this field. I myself have had many experiences confirming this to be so.'
There are claims about Kalashnikov copying other designs, like Bulkin's TKB-415 or Simonov's AVS-31.
Receiver development.
There were many difficulties during the initial phase of production. The first production models had stamped sheet metal receivers. Difficulties were encountered in welding the guide and ejector rails, causing high rejection rates. Instead of halting production, a heavy machined receiver was substituted for the sheet metal receiver. This was a more costly process, but the use of machined receivers accelerated production as tooling and labor for the earlier Mosin–Nagant rifle's machined receiver were easily adapted. Partly because of these problems, the Soviets were not able to distribute large numbers of the new rifle to soldiers until 1956. During this time, production of the interim SKS rifle continued.
Once manufacturing difficulties had been overcome, a redesigned version designated the AKM (M for 'modernized' or 'upgraded'; in Russian: 'Автомат Калашникова Модернизированный [Avtomat Kalashnikova Modernizirovanniy])' was introduced in 1959. This new model used a stamped sheet metal receiver and featured a slanted muzzle brake on the end of the barrel to compensate for muzzle rise under recoil. In addition, a hammer retarder was added to prevent the weapon from firing out of battery (without the bolt being fully closed), during rapid or automatic fire. This is also sometimes referred to as a 'cyclic rate reducer', or simply 'rate reducer', as it also has the effect of reducing the number of rounds fired per minute during automatic fire. It was also roughly one-third lighter than the previous model.
Both licensed and unlicensed production of the Kalashnikov weapons abroad were almost exclusively of the AKM variant, partially due to the much easier production of the stamped receiver. This model is the most commonly encountered, having been produced in much greater quantities. All rifles based on the Kalashnikov design are frequently referred to as AK-47s in the West, although this is only correct when applied to rifles based on the original three receiver types. In most former Eastern Bloc countries, the weapon is known simply as the 'Kalashnikov' or 'AK'. The photo above at right illustrates the differences between the Type 2 milled receiver and the Type 4 stamped, including the use of rivets rather than welds on the stamped receiver, as well as the placement of a small dimple above the magazine well for stabilization of the magazine.
In 1974, the Soviets began replacing their AK-47 and AKM rifles with a newer design, the AK-74, which uses 5.45×39mm ammunition. This new rifle and cartridge had only started to be manufactured in Eastern European nations when the Soviet Union collapsed, drastically slowing production of the AK-74 and other weapons of the former Soviet bloc.
Features.
The AK-47 was designed to be a simple, reliable automatic rifle that could be manufactured quickly and cheaply, using mass production methods that were state of the art in the Soviet Union during the late 1940s. The large gas piston, generous clearances between moving parts, and tapered cartridge case design allow the gun to endure large amounts of foreign matter and fouling without failing to cycle. This reliability comes at the expense of accuracy, as the looser tolerances do not allow for precision and consistency.
Operating cycle.
The AK-47 uses a long stroke gas system. To fire, the operator inserts a loaded magazine, pulls back and releases the charging handle, and then pulls the trigger. In semi-automatic, the firearm fires only once, requiring the trigger to be released and depressed again for the next shot. In full-automatic, the rifle continues to fire automatically cycling fresh rounds into the chamber, until the magazine is exhausted or pressure is released from the trigger. As each bullet travels through the barrel, a portion of the gases expanding behind it is diverted into the gas tube above the barrel, where it acts on the gas piston. The piston, in turn, is driven backward, pushing the bolt carrier, which causes the bolt to move backwards, ejecting the spent round, and chambering a new round when the recoil spring pushes it forward.
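The cycle described above can be sketched as a simple state sequence. This is an illustrative model only: the step names and the `run_cycle` function are invented for this sketch, and the granularity is coarser than the real mechanism, but the ordering follows the text:

```python
# Illustrative sketch of the AK-47 long-stroke gas operating cycle.
# Step names are this sketch's own; the ordering follows the description above.
CYCLE = [
    "hammer strikes, cartridge fires",
    "gas diverted into tube above barrel",
    "gas drives piston and bolt carrier rearward",
    "bolt unlocks and extracts spent case",
    "spent case ejected",
    "recoil spring drives carrier forward",
    "fresh round stripped from magazine and chambered",
    "bolt locks, ready to fire",
]

def run_cycle(rounds_in_magazine, full_auto=True, trigger_held=True):
    """Return the number of rounds fired for a single trigger pull."""
    fired = 0
    while rounds_in_magazine > 0:
        fired += 1
        rounds_in_magazine -= 1
        # In semi-automatic the trigger must be released and pulled again for
        # each shot, so one pull fires exactly one round; in full-automatic the
        # cycle repeats until the trigger is released or the magazine is empty.
        if not full_auto or not trigger_held:
            break
    return fired

print(run_cycle(30, full_auto=False))  # 1
print(run_cycle(30, full_auto=True))   # 30 (magazine exhausted)
```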
Fire selector.
The prototype of the AK-47 had a separate fire selector and safety. These were later combined in the production version to simplify the design. The fire selector is a large lever located on the right side of the rifle; it acts as a dust cover and prevents the charging handle from being pulled fully to the rear when it is on safe. It is operated by the shooter's right forefinger and has three settings: safe (up), full-auto (center), and semi-auto (down). The reason for this ordering is that, under stress, a soldier will push the selector lever down with considerable force, bypassing the full-auto stage and setting the rifle to semi-auto. Setting the AK-47 to full-auto requires the deliberate action of centering the selector lever.
Some AK-type rifles also have a small vertical selector lever on the left side of the receiver just above the pistol grip. This lever is operated by the shooter's right thumb and has three settings: safe (forward), full-auto (center), and semi-auto (backward).
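The logic behind the selector ordering can be made explicit with a tiny model. The enum and helper functions below are illustrative only (the names are this sketch's own); they encode the physical top-to-bottom order of the right-side lever described above and why a panicked full push lands on semi-auto:

```python
from enum import Enum

# Selector positions in their physical order on the right-side lever,
# top to bottom, as described in the text (illustrative model).
class Selector(Enum):
    SAFE = 0       # fully up: also blocks the charging handle
    FULL_AUTO = 1  # center detent
    SEMI_AUTO = 2  # fully down

def slam_down(_current):
    """A hard, unaimed push drives the lever past the center detent."""
    return Selector.SEMI_AUTO

def deliberate_center(_current):
    """Full-auto requires deliberately stopping at the middle position."""
    return Selector.FULL_AUTO

# Under stress, shoving the lever down from SAFE skips FULL_AUTO entirely:
print(slam_down(Selector.SAFE))  # Selector.SEMI_AUTO
```

The design choice is ergonomic: the setting a stressed shooter reaches by default is the ammunition-conserving one.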
Sights.
The AK-47 has a sight radius. The AK-47 uses a notched rear tangent iron sight that is adjustable and is calibrated in hundreds of metres from 100 to 800 metres (100 to 1000 metres for AKM models). The front sight is a post adjustable for elevation in the field. Horizontal adjustment is done by the armory before issue. The 'fixed' battle setting can be used for all ranges up to 300 metres. This 'point-blank range' setting, marked 'П', allows the shooter to fire at close-range targets without adjusting the sights. These settings mirror those of the Mosin–Nagant and SKS rifles which the AK-47 replaced. Some AK-type rifles have a front sight with a flip-up luminous dot calibrated at 50 metres, for improved night fighting.
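The graduation scheme above is easy to model. The sketch below is illustrative only (the function name and the closest-graduation rule are this sketch's assumptions, not armorer's doctrine); it encodes the 100 m graduations and the 'П' battle setting usable inside 300 m:

```python
# Rear tangent sight graduations described above: hundreds of metres.
AK47_GRADUATIONS = list(range(100, 801, 100))   # 100-800 m (AK-47)
AKM_GRADUATIONS = list(range(100, 1001, 100))   # 100-1000 m (AKM)

def sight_setting(target_range_m, graduations=AK47_GRADUATIONS):
    """Pick a setting: 'П' inside 300 m, else the nearest graduation.

    The nearest-graduation rule is an assumption of this sketch.
    """
    if target_range_m <= 300:
        return "П"   # point-blank battle setting, no adjustment needed
    return min(graduations, key=lambda g: abs(g - target_range_m))

print(sight_setting(250))                     # П
print(sight_setting(460))                     # 500
print(sight_setting(960, AKM_GRADUATIONS))    # 1000
```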
Side rail.
All current AKs (100 series) and some older models have side rails for mounting a variety of scopes and sighting devices, such as the PSO-1 Optical Sniper Sight.
The side rails allow for removal and remounting of optical accessories without interfering with the zeroing of the optic. However, the 100 series side-folding stocks cannot be folded with the optics mounted.
Terminal ballistics.
The AK fires the 7.62×39mm cartridge with a muzzle velocity of . The cartridge weight is , the projectile weight is . The AK has excellent penetration when shooting through heavy foliage, walls or a common vehicle's metal body and into an opponent attempting to use these things as cover. The 7.62×39mm M43 projectile does not generally fragment when striking an opponent and has an unusual tendency to remain intact even after making contact with bone. The 7.62×39mm round produces significant wounding in cases where the bullet tumbles in tissue, but produces relatively minor wounds in cases where the bullet exits before beginning to yaw. In the absence of yaw, the M43 round can pencil through tissue with relatively little injury.
Most, if not all, of the 7.62×39mm ammunition found today is of the upgraded M67 variety. This variety deleted the steel insert, shifting the center of gravity rearward, and allowing the projectile to destabilize (or yaw) at about , nearly earlier in tissue than the M43 round. This change also reduces penetration in ballistic gelatin to ~ for the newer M67 round versus ~ for the older M43 round. However, the wounding potential of M67 is mostly limited to the small permanent wound channel the bullet itself makes, especially when the bullet yaws (tumbles).
Accuracy.
The AK-47's accuracy has always been considered 'good enough' to hit an adult male torso out to about 300 meters. 'At 300 meters, expert shooters (firing AK-47s) at prone or at bench rest positions had difficulty putting ten consecutive rounds on target.' Despite the Soviet engineers' best efforts, and 'no matter the changes, the AK-47's accuracy could not be significantly improved; when it came to precise shooting, it was a stubbornly mediocre arm.' An AK can fire a 10-shot group of at 100 meters, and at 300 meters. Curiously, the newer stamped steel receiver AKM models are actually less accurate than their predecessors. 'There are advantages and disadvantages in both forged/milled receivers and stamped receivers. Milled/Forged Receivers are much more rigid, flexing less as the rifle is fired thus not hindering accuracy as much as stamped receivers. Stamped receivers on the other hand are a bit more rugged since it has some give in it and have less chances of having metal fatigue under heavy usage.' As a result, milled AK-47s are capable of shooting 3–5 inch groups at 100 yards, whereas stamped AKMs are capable of shooting 4–6 inch groups at 100 yards. The best shooters are able to hit a man-sized target at 800 metres within five shots (firing from prone or bench rest position) or ten shots (standing).
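The group sizes quoted at 100 yards convert directly into minutes of angle, the usual unit for rifle accuracy. The arithmetic below uses only the 3–5 inch and 4–6 inch figures from the text plus the standard conversion (1 MOA subtends about 1.047 inches at 100 yards):

```python
# Convert group sizes at 100 yards into minutes of angle (MOA).
# 1 MOA subtends ~1.047 inches at 100 yards (standard conversion).
INCHES_PER_MOA_AT_100YD = 1.047

def group_moa(inches, distance_yd=100):
    """Angular group size in MOA for a linear group at the given distance."""
    return inches / (INCHES_PER_MOA_AT_100YD * distance_yd / 100)

milled = (group_moa(3), group_moa(5))   # milled AK-47: 3-5 inch groups
stamped = (group_moa(4), group_moa(6))  # stamped AKM: 4-6 inch groups

print(f"milled:  {milled[0]:.1f}-{milled[1]:.1f} MOA")   # ~2.9-4.8 MOA
print(f"stamped: {stamped[0]:.1f}-{stamped[1]:.1f} MOA") # ~3.8-5.7 MOA
```

For comparison's sake the angular figure is what stays constant with distance: a 5 MOA rifle prints roughly 5 inches at 100 yards and 15 inches at 300 yards.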
Magazines.
The standard magazine capacity is 30 rounds. There are also 10, 20 and 40-round box magazines, as well as 75-round drum magazines.
The AK-47's 30-round magazines have a pronounced curve that allows them to smoothly feed ammunition into the chamber. Their heavy steel construction combined with 'feed-lips' (the surfaces at the top of the magazine that control the angle at which the cartridge enters the chamber) machined from a single steel billet makes them highly resistant to damage. These magazines are so strong that 'Soldiers have been known to use their mags as hammers, and even bottle openers.' This makes the AK-47 magazine more reliable, although heavier than U.S. and NATO magazines. The early slab-sided steel AK-47 magazines weigh empty. The later steel AKM magazines had lighter sheet-metal bodies with prominent reinforcing ribs weighing empty. The current issue steel-reinforced plastic magazines are even lighter, weighing empty. Early steel AK-47 magazines are 9.75 inches long, and the later ribbed steel AKM and newer plastic magazines are about an inch shorter.
Most Yugoslavian and some East German AK magazines were made with cartridge followers that hold the bolt open when empty; however, most AK magazine followers allow the bolt to close when the magazine is empty.
Additional firepower.
All current model AK-47 rifles can mount under-barrel 40 mm grenade launchers such as the BG-15, GP-25, GP-30 & GP-34, which can fire up to 20 rounds per minute and have an effective range of up to 400 metres. The main grenade is the VOG-25 (VOG-25M) fragmentation grenade which has a 6 m (9 m) (20 ft (30 ft)) lethality radius. The VOG-25P/VOG-25PM ('jumping') variant explodes above the ground.
The Zastava M70s (AK-type rifles) also have a grenade-launching sight and gas cut-off on the gas block, and are capable of launching rifle grenades. To launch them, a 22 mm diameter grenade-launching adapter is screwed on in place of the slant brake or other muzzle device. Other AK-47 variants tuned for launching rifle grenades are the Polish Kbkg wz. 1960/72 and the Hungarian AMP-69.
The AK-47 can also mount a (rarely used) cup-type grenade launcher that fires standard RGD-5 Soviet hand-grenades. The maximum effective range is approximately 150 meters. This cup-type launcher can also be used to launch tear-gas and riot control grenades.
Service life.
The AK-47 and its variants are made in dozens of countries, with 'quality ranging from finely engineered weapons to pieces of questionable workmanship.' As a result, the AK-47 has a service/system life of approximately 6,000 to 15,000 rounds. The AK-47 was designed to be a cheap, simple, easy-to-manufacture assault rifle, perfectly matching Soviet military doctrine that treats equipment and weapons as disposable items. As units are often deployed without adequate logistical support and dependent on 'battlefield cannibalization' for resupply, it is actually more cost-effective to replace rather than repair weapons.
The AK-47 has small parts and springs that need to be replaced every few thousand rounds. However, 'Every time it is disassembled beyond the field stripping stage, it will take some time for some parts to regain their fit, some parts may tend to shake loose and fall out when firing the weapon. Some parts of the AK-47 line are riveted together. Repairing these can be quite a hassle, since the end of the rivet has to be ground off and a new one set after the part is replaced.'
Variants.
Early variants (7.62×39mm)
Modernized (7.62×39mm)
Low-impulse variants (5.45×39mm)
The 100 Series
5.45×39mm / 5.56×45mm / 7.62×39mm
Other weapons
AK-12 series
Production outside of the Soviet Union/Russia.
Military variants only. Includes new designs substantially derived from the Kalashnikov.
Certainly more have been produced elsewhere, but the above list represents known producers and is limited to only military variants. An updated AK-47 design – the AK-103 – is still produced in Russia.
Derivatives.
The basic design of the AK-47 has been used as the basis for other successful rifle designs such as the Finnish Rk 62/76 and Rk 95 Tp, the Israeli Galil, the Indian INSAS and the Yugoslav Zastava M76 and M77/82 rifles. Several bullpup designs have surfaced such as the Chinese Norinco Type 86S, although none have been produced in quantity. Bullpup conversions are also available commercially.
Licensing.
OJSC IzhMash has repeatedly claimed that the majority of manufacturers produce AK-47s without a proper license from IzhMash. The Izhevsk Machine Tool Factory acquired a patent in 1999, making the manufacture of the newest Kalashnikov rifles, such as the AK-100 series, by anyone other than themselves illegal in countries where the patent is granted. However, older variants, such as the AK and AKM, are in the public domain due to the age of the design.
Illicit trade.
Throughout the world, the AK and its variants are among the most commonly smuggled small arms, sold to governments, rebels, criminals, and civilians alike with little international oversight. In some countries prices for AKs are very low: in Somalia, Rwanda, Mozambique, Congo and Tanzania they run between $30 and $125 per weapon, and prices have fallen in the last few decades due to mass counterfeiting. Moisés Naím observed that in a small town in Kenya in 1986, an AK-47 cost fifteen cows, but that by 2005 the price was down to four cows, indicating that supply was 'immense'. The weapon has appeared in a number of conflicts, including clashes in the Balkans, Iraq, Afghanistan, and Somalia.
The Taliban and the Northern Alliance fought each other with Soviet AKs; some of these were exported to Pakistan. The gun is now also made in Pakistan's semi-autonomous areas (see Khyber Pass Copy). 'The Distribution of Iranian Ammunition in Africa', a report by the private British arms-tracking group Conflict Armament Research (CAR), shows how Iran broke trade embargoes and infiltrated African markets with massive amounts of illegal, unmarked 7.62 mm rounds for Kalashnikov-style AK-47 rifles.
Estimated numbers of AK-type weapons vary. The Small Arms Survey suggests that 'between 70 and 100 million of these weapons have been produced since 1947.' The World Bank estimates that of the 500 million total firearms available worldwide, 100 million belong to the Kalashnikov family, and 75 million of those are AK-47s. Because AK-type weapons have been made in other countries, often illicitly, it is impossible to know how many really exist.
Cultural influence.
During the Cold War, the Soviet Union, the People's Republic of China, and Western countries (especially the United States) supplied arms and technical knowledge to numerous countries and rebel forces in a global struggle between the Warsaw Pact nations and their allies and NATO and its allies. While the NATO countries used rifles such as the relatively expensive M14, FN FAL, HK G3 and M16 during this time, the low production and materials costs of the AK-47 meant that the USSR could produce and supply its allies at a very low cost. Because of its low cost, it was also duplicated or used as the basis for many other rifles (see List of weapons influenced by the Kalashnikov design), such as the Israeli Galil, Chinese Type 56, and Swiss SIG SG 550. As a result, the Cold War saw the mass export of AK-47s by the Soviet Union and the PRC to their allies, such as the Nicaraguan Sandinistas and the Viet Cong, as well as Middle Eastern, Asian, and African revolutionaries. The United States also purchased the Type 56 from the PRC to give to the mujahideen guerrillas during the Soviet war in Afghanistan.
The proliferation of this weapon is reflected by more than just numbers. The AK-47 is included in the flag of Mozambique and its emblem, an acknowledgment that the country's leaders gained power in large part through the effective use of their AK-47s. It is also found in the coats of arms of East Timor, the revolution era coat of arms of Burkina Faso and the flag of Hezbollah.
In parts of the Western world, the AK-47 is associated with its enemies, both Cold War era and present-day. In pro-communist states, the AK-47 became a symbol of third-world revolution. During the 1980s, the Soviet Union became the principal arms dealer to countries embargoed by Western nations, including Middle Eastern nations such as Syria, Libya and Iran, which welcomed Soviet backing against Israel. After the fall of the Soviet Union, AK-47s were sold both openly and on the black market to any group with cash, including drug cartels and dictatorial states, and more recently they have been seen in the hands of Islamic groups such as the Taliban and Al-Qaeda in Afghanistan and Iraq, and of FARC and Ejército de Liberación Nacional guerrillas in Colombia. Western movies often portray criminals, gang members and terrorists using AK-47s. For these reasons, in the U.S. and Western Europe the AK-47 is stereotypically regarded as the weapon of choice of insurgents, gangsters and terrorists. Conversely, throughout the developing world the AK-47 is positively associated with revolutionaries fighting against foreign occupation, imperialism, or colonialism.
In Mexico, the AK-47 is known as 'Cuerno de Chivo' (literally 'Goat's Horn') because of its curved magazine design and is one of the weapons of choice of Mexican drug cartels. It is sometimes mentioned in Mexican folk music lyrics.
In 2006, Colombian musician and peace activist César López devised the 'escopetarra', an AK converted into a guitar. One sold for US$17,000 in a fundraiser held to benefit the victims of anti-personnel mines, while another was exhibited at the United Nations' Conference on Disarmament.
The AK-47 made an appearance in U.S. popular culture as a recurring focus in the 2005 Nicolas Cage film 'Lord of War'. There are numerous monologues in the movie focusing on the weapon and its effects on global conflict and the gun running market, such as:
'Of all the weapons in the vast Soviet arsenal, nothing was more profitable than Avtomat Kalashnikova model of 1947. More commonly known as the AK-47, or Kalashnikov. It's the world's most popular assault rifle. A weapon all fighters love. An elegantly simple 9 pound amalgamation of forged steel and plywood. It doesn't break, jam, or overheat. It'll shoot whether it's covered in mud or filled with sand. It's so easy, even a child can use it; and they do. The Soviets put the gun on a coin. Mozambique put it on their flag. Since the end of the Cold War, the Kalashnikov has become the Russian people's greatest export. After that comes vodka, caviar, and suicidal novelists. One thing is for sure, no one was lining up to buy their cars.'
Kalashnikov Museum.
The Kalashnikov Museum (also called the AK-47 museum) opened on 4 November 2004 in Izhevsk, Udmurt Republic, a city in the Ural region of Russia. The museum chronicles the biography of General Kalashnikov and documents the invention of the AK-47. The museum complex of small arms of M. T. Kalashnikov, a series of halls and multimedia exhibitions, is devoted to the evolution of the AK-47 assault rifle and attracts 10,000 visitors monthly.
Nadezhda Vechtomova, the museum director stated in an interview that the purpose of the museum is to honor the ingenuity of the inventor and the hard work of the employees and to 'separate the weapon as a weapon of murder from the people who are producing it and to tell its history in our country.'
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1349'>
Atanasoff–Berry computer
The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer, an early electronic digital computing device that has remained somewhat obscure. Whether it was truly the first is debated among historians of computer technology: most would credit John Mauchly and J. Presper Eckert, creators of the ENIAC, with the title, while others argue that the credit belongs to Iowa State mathematics and physics professor John Vincent Atanasoff for his work on the ABC with the help of graduate student Clifford Berry. Conceived in 1937, the machine was not programmable, being designed only to solve systems of linear equations. It was successfully tested in 1942. However, its intermediate result storage mechanism, a paper card writer/reader, was unreliable, and when Atanasoff left Iowa State College for World War II assignments, work on the machine was discontinued. The ABC pioneered important elements of modern computing, including binary arithmetic and electronic switching elements, but its special-purpose nature and lack of a changeable, stored program distinguish it from modern computers. The computer was designated an IEEE Milestone in 1990.
Atanasoff and Berry's computer work was not widely known until it was rediscovered in the 1960s, amidst conflicting claims about the first instance of an electronic computer. At that time, the ENIAC was considered to be the first computer in the modern sense, but in 1973 a U.S. District Court invalidated the ENIAC patent and concluded that the ENIAC inventors had derived the subject matter of the electronic digital computer from Atanasoff (see Patent dispute).
Design and construction.
According to Atanasoff's account, several key principles of the Atanasoff–Berry Computer were conceived in a sudden insight after a long nighttime drive to Rock Island, Illinois, during the winter of 1937–38. The ABC innovations included electronic computation, binary arithmetic, parallel processing, regenerative capacitor memory, and a separation of memory and computing functions. The mechanical and logic design was worked out by Atanasoff over the next year. A grant application to build a proof of concept prototype was submitted in March 1939 to the Agronomy department which was also interested in speeding up computation for economic and research analysis. $5,000 of further funding to complete the machine came from the nonprofit Research Corporation of New York City.
The ABC was built by Atanasoff and Berry in the basement of the physics building at Iowa State College during 1939–42. The initial funds were released in September, and the 11-tube prototype was first demonstrated in October 1939. A December demonstration prompted a grant for construction of the full-scale machine. The ABC was built and tested over the next two years. A January 15, 1941 story in the 'Des Moines Register' announced the ABC as 'an electrical computing machine' with more than 300 vacuum tubes that would 'compute complicated algebraic equations' (but gave no precise technical description of the computer). The system weighed more than seven hundred pounds (320 kg). It contained approximately 1 mile (1.6 km) of wire, 280 dual-triode vacuum tubes, 31 thyratrons, and was about the size of a desk.
It was not a Turing-complete computer, which distinguishes it from more general machines, like Konrad Zuse's contemporary Z3 (1941), or later machines like the 1946 ENIAC, the 1949 EDVAC, the University of Manchester designs, or Alan Turing's post-war design of ACE at NPL and elsewhere. Nor did it implement the stored-program architecture that made fully general-purpose, reprogrammable computers practical.
The machine was, however, the first to implement three critical ideas that are still part of every modern computer:
In addition, the system pioneered the use of regenerative capacitor memory, as in the DRAM still widely used today.
The memory of the Atanasoff–Berry Computer was a pair of drums, each containing 1600 capacitors that rotated on a common shaft once per second. The capacitors on each drum were organized into 32 'bands' of 50 (30 active bands and 2 spares in case a capacitor failed), giving the machine a speed of 30 additions/subtractions per second. Data was represented as 50-bit binary fixed point numbers. The electronics of the memory and arithmetic units could store and operate on 60 such numbers at a time (3000 bits).
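The capacity and speed figures above follow directly from the drum geometry; a quick illustrative check (hypothetical modern Python, not part of the original description):

```python
# Drum memory geometry of the ABC, per the figures above.
drums = 2
bands_per_drum = 32        # 30 active bands plus 2 spares per drum
active_bands_per_drum = 30
bits_per_band = 50         # one 50-bit fixed-point number per band
rotations_per_second = 1   # both drums on a common shaft, once per second

capacitors_per_drum = bands_per_drum * bits_per_band          # 1600 capacitors
active_numbers = drums * active_bands_per_drum                # 60 numbers at a time
active_bits = active_numbers * bits_per_band                  # 3000 bits
ops_per_second = active_bands_per_drum * rotations_per_second # 30 add/sub per second
```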
The alternating current power line frequency of 60 Hz was the primary clock rate for the lowest level operations.
The arithmetic logic functions were fully electronic, implemented with vacuum tubes. The family of logic gates ranged from inverters to two and three input gates. The input and output levels and operating voltages were compatible between the different gates. Each gate consisted of one inverting vacuum tube amplifier, preceded by a resistor divider input network that defined the logical function. The control logic functions, which only needed to operate once per drum rotation and therefore did not require electronic speed, were electromechanical, implemented with relays.
Although the Atanasoff–Berry Computer was an important step up from earlier calculating machines, it was not able to run entirely automatically through an entire problem. An operator was needed to operate the control switches to set up its functions, much like the electro-mechanical calculators and unit record equipment of the time. Selection of the operation to be performed (reading, writing, converting between binary and decimal, or reducing a set of equations) was made by front panel switches and in some cases jumpers.
There were two forms of input and output: primary user input and output and an intermediate results output and input. The intermediate results storage allowed operation on problems too large to be handled entirely within the electronic memory. (The largest problem that could be solved without the use of the intermediate output and input was two simultaneous equations, a trivial problem.)
Intermediate results were binary, written onto paper sheets by electrostatically modifying the resistance at 1500 locations to represent 30 of the 50-bit numbers (one equation). Each sheet could be written or read in one second. These units limited the reliability of the system to about 1 error in 100,000 calculations, primarily attributed to lack of control of the sheets' material characteristics. In retrospect, a solution could have been to add a parity bit to each number as written. This problem was not solved by the time Atanasoff left the university for war-related work.
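A parity bit of the kind suggested above detects any single-bit error in a stored word. A minimal sketch in modern Python (purely illustrative; the ABC itself had no such circuit):

```python
def with_parity(value, bits=50):
    """Append an even-parity bit to a `bits`-wide binary number."""
    parity = bin(value).count("1") % 2
    return (value << 1) | parity

def parity_ok(word):
    """True if a stored word (value plus parity bit) has an even 1-count."""
    return bin(word).count("1") % 2 == 0

stored = with_parity(0b1011)          # three 1-bits, so parity bit is 1
assert parity_ok(stored)              # an intact word passes the check
assert not parity_ok(stored ^ 0b100)  # any single flipped bit is detected
```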
Primary user input was decimal, via standard IBM 80 column punched cards and output was decimal, via a front panel display.
Function.
The ABC was designed for a specific purpose: the solution of systems of simultaneous linear equations. It could handle systems with up to twenty-nine equations, a difficult problem for the time. Problems of this scale were becoming common in physics, the department in which Atanasoff worked. The machine could be fed two linear equations with up to twenty-nine variables and a constant term and eliminate one of the variables. This process would be repeated manually for each pair of equations, resulting in a system of equations with one fewer variable. Then the whole process would be repeated to eliminate another variable.
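The elimination step described above is the core of Gaussian elimination. A minimal sketch in Python (illustrative only; the actual machine worked on 50-bit fixed-point words by repeated addition, subtraction and shifting, not exact fractions):

```python
from fractions import Fraction

def eliminate(eq1, eq2, var=0):
    """Combine two linear equations so the coefficient of `var` cancels.

    Each equation is a list of coefficients with the constant term last,
    e.g. [2, 1, 5] stands for 2x + 1y = 5.
    """
    a, b = Fraction(eq1[var]), Fraction(eq2[var])
    # Scale eq1 by b and eq2 by a, then subtract: the `var` terms cancel.
    return [b * Fraction(x) - a * Fraction(y) for x, y in zip(eq1, eq2)]

# Eliminate x from  2x + y = 5  and  x - y = 1:
reduced = eliminate([2, 1, 5], [1, -1, 1])
# reduced == [0, 3, 3], i.e. 3y = 3, so y = 1 (back-substituting gives x = 2)
```

Repeating this pairwise combination across all equations, then recursing on the smaller system, is exactly the manual loop the operator performed with the ABC.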
George W. Snedecor, the head of Iowa State's Statistics Department, was very likely the first user of an electronic digital computer to solve real world mathematics problems. He submitted many of these problems to Atanasoff.
Patent dispute.
On June 26, 1947, J. Presper Eckert and John Mauchly were the first to patent a digital computing device (the ENIAC), much to the surprise of Atanasoff. The ABC had been examined by John Mauchly in June 1941, and Isaac Auerbach, a former student of Mauchly's, alleged that it influenced his later work on ENIAC, although Mauchly denied this (Shurkin, pp. 280–299). In 1967 Honeywell sued Sperry Rand in an attempt to break its ENIAC patents, arguing that the ABC constituted prior art. The United States District Court for the District of Minnesota released its judgment on October 19, 1973, finding in 'Honeywell v. Sperry Rand' that the ENIAC patent was a derivative of John Atanasoff's invention.
Campbell-Kelly and Aspray conclude:
The case was legally resolved on October 19, 1973 when U.S. District Judge Earl R. Larson held the ENIAC patent invalid, ruling that the ENIAC derived many basic ideas from the Atanasoff–Berry Computer. Judge Larson explicitly stated, 'Eckert and Mauchly did not themselves first invent the automatic electronic digital computer, but instead derived that subject matter from one Dr. John Vincent Atanasoff'.
Replica.
The original ABC was eventually dismantled, when the University converted the basement to classrooms, and all of its pieces except for one memory drum were discarded. In 1997, a team of researchers led by John Gustafson from Ames Laboratory (located on the Iowa State campus) finished building a working replica of the Atanasoff–Berry Computer at a cost of $350,000. The replica ABC is now on permanent display in the first floor lobby of the Durham Center for Computation and Communication at Iowa State University. As of May 2012, it is on loan to the Computer History Museum in Mountain View, California for a major exhibition.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1354'>
Andes
The Andes is the longest continental mountain range in the world, a continuous range of highlands along the western coast of South America. This range is about long, about to wide (widest between 18° south and 20° south latitude), and of an average height of about . The Andes extend from north to south through seven South American countries: Venezuela, Colombia, Ecuador, Peru, Bolivia, Chile, and Argentina.
Along its length, the Andes is split into several ranges, which are separated by intermediate depressions. The Andes is the location of several high plateaux – some of which host major cities such as Quito, Bogotá, Arequipa, Medellín, Sucre, Mérida, and La Paz. The Altiplano plateau is the world's second-highest following the Tibetan plateau. These ranges are in turn grouped into three major divisions based on climate: the Tropical Andes, the Dry Andes, and the Wet Andes.
The Andes is the world's highest mountain range outside of Asia. The highest peak, Mount Aconcagua, rises to an elevation of about above sea level. The peak of Chimborazo in the Ecuadorean Andes is farther from Earth's center than any other location on Earth's surface, due to the equatorial bulge resulting from Earth's rotation. The world's highest volcanoes are in the Andes, including Ojos del Salado on the Chile-Argentina border which rises to 6,893 m (22,615 ft). Over 50 other Andean volcanoes rise above 6,000 m (19,685 ft). The peak of Alpamayo in the Andes of Peru rises to an elevation of 5,947 m (19,511 ft).
Name.
The etymology of the word 'Andes' has been debated. The major consensus is that it derives from the Quechua word 'anti', which means 'east', as in 'Antisuyu' (Quechua for 'east region'), one of the four regions of the Inca Empire. Derivation from the Spanish 'andén' (in the sense of cultivation terrace) has also been proposed, but is considered very unlikely.
Geography.
The Andes can be divided into three sections:
In the northern part of the Andes, the isolated Sierra Nevada de Santa Marta range is often considered to be part of the Andes. The term 'cordillera' comes from the Spanish word 'cuerda', meaning 'rope'. The Andes range is about wide throughout its length, except in the Bolivian flexure where it is about wide. The Leeward Antilles islands Aruba, Bonaire, and Curaçao, which lie in the Caribbean Sea off the coast of Venezuela, were thought to represent the submerged peaks of the extreme northern edge of the Andes range, but ongoing geological studies indicate that such a simplification does not do justice to the complex tectonic boundary between the South American and Caribbean plates.
Geology.
The Andes are a Mesozoic–Tertiary orogenic belt of mountains along the Pacific Ring of Fire, a zone of volcanic activity that encompasses the Pacific rim of the Americas as well as the Asia-Pacific region. The Andes are the result of plate tectonics processes, caused by the subduction of oceanic crust beneath the South American plate. The main cause of the rise of the Andes is the compression of the western rim of the South American Plate due to the subduction of the Nazca Plate and the Antarctic Plate. To the east, the Andes range is bounded by several sedimentary basins, such as the Orinoco, Amazon, Madre de Dios and Gran Chaco basins, which separate the Andes from the ancient cratons in eastern South America. In the south the Andes shares a long boundary with the former Patagonia Terrane. To the west, the Andes end at the Pacific Ocean, although the Peru-Chile trench can be considered its ultimate western limit. From a geographical standpoint the Andes are considered to have their western boundaries marked by the appearance of coastal lowlands and a less rugged topography.
Orogeny.
The western rim of the South American Plate has been the place of several pre-Andean orogenies since at least the period of the late Proterozoic and early Paleozoic when several terranes and microcontinents collided and amalgamated with the ancient cratons of eastern South America, by then the South American part of Gondwana.
The formation of the modern Andes began with the events of the Triassic when Pangea began to break up and several rifts developed. It continued through the Jurassic Period. It was during the Cretaceous Period that the Andes began to take its present form, by the uplifting, faulting and folding of sedimentary and metamorphic rocks of the ancient cratons to the east. The rise of the Andes has not been constant and different regions have had different degrees of tectonic stress, uplift, and erosion.
Tectonic forces above the subduction zone along the entire west coast of South America where the Nazca Plate and a part of the Antarctic Plate are sliding beneath the South American Plate continue to produce an ongoing orogenic event resulting in minor to major earthquakes and volcanic eruptions to this day. In the extreme south a major transform fault separates Tierra del Fuego from the small Scotia Plate. Across the wide Drake Passage lie the mountains of the Antarctic Peninsula south of the Scotia Plate which appear to be a continuation of the Andes chain.
Volcanism.
The Andes range has many active volcanoes, which are distributed in four volcanic zones separated by areas of inactivity. The Andean volcanism is a result of subduction of the Nazca Plate and Antarctic Plate underneath the South American Plate. The belt is subdivided into four main volcanic zones that are separated from each other by volcanic gaps. The volcanoes of the belt are diverse in terms of activity style, products and morphology. While some differences can be explained by which volcanic zone a volcano belongs to, there are significant differences inside volcanic zones and even between neighbouring volcanoes. Despite being a type location for calc-alkalic and subduction volcanism, the Andean Volcanic Belt has a large range of volcano-tectonic settings, such as rift systems and extensional zones, transpressional faults, and subduction of mid-ocean ridges and seamount chains, as well as a large range of crustal thicknesses and magma ascent paths, and different amounts of crustal assimilation.
Ore deposits and evaporites.
The Andes mountains host large ore and salt deposits, and parts of their eastern fold and thrust belt act as traps for commercially exploitable amounts of hydrocarbons. In the forelands of the Atacama Desert some of the largest porphyry copper mineralizations occur, making Chile and Peru the first and second largest exporters of copper in the world. Porphyry copper in the western slopes of the Andes has been generated by hydrothermal fluids (mostly water) during the cooling of plutons or volcanic systems. The porphyry mineralization further benefited from the dry climate, which kept it largely free from the disturbing action of meteoric water. The dry climate in the central western Andes has also led to the creation of extensive saltpeter deposits, which were extensively mined until the invention of synthetic nitrates. Yet another result of the dry climate are the salars of Atacama and Uyuni, the first being the largest source of lithium today and the second the world's largest reserve of the element. Early Mesozoic and Neogene plutonism in Bolivia's Cordillera Central created the Bolivian tin belt as well as the famous, now depleted, deposits of Cerro Rico de Potosí.
Climate and hydrology.
The climate in the Andes varies greatly depending on latitude, altitude, and proximity to the sea. Temperature, atmospheric pressure and humidity decrease at higher elevations. The southern section is rainy and cool, while the central Andes are dry. The northern Andes are typically rainy and warm, with an average temperature of in Colombia. The climate is known to change drastically over rather short distances: rainforests exist just miles away from the snow-covered peak of Cotopaxi. The mountains have a large effect on the temperatures of nearby areas. The snow line depends on location. It lies between 4,500 and 4,800 m (14,800–15,800 ft) in the tropical Ecuadorian, Colombian, Venezuelan, and northern Peruvian Andes, rising to 4,800–5,200 m (15,800–17,060 ft) in the drier mountains of southern Peru and northern Chile south to about 30°S, then descending to on Aconcagua at 32°S, at 40°S, at 50°S, and only in Tierra del Fuego at 55°S; from 50°S, several of the larger glaciers descend to sea level.
The Andes of Chile and Argentina can be divided into two climatic and glaciological zones: the Dry Andes and the Wet Andes. Since the Dry Andes extend from the latitudes of the Atacama Desert to the area of the Maule River, precipitation is more sporadic and there are strong temperature oscillations. The line of equilibrium may shift drastically over short periods of time, leaving a whole glacier in the ablation area or in the accumulation area.
In the high Andes of central Chile and Mendoza Province rock glaciers are larger and more common than glaciers; this is due to the high exposure to solar radiation.
Though precipitation increases with height, semiarid conditions prevail even on the nearly 7,000 m high summits of the Andes here. This dry steppe climate is considered typical of the subtropical position at 32–34° S. The valley bottoms therefore carry no woods, only dwarf scrub. The largest glaciers, e.g. the Plomo glacier and the Horcones glaciers, do not even reach 10 km in length and have only an insignificant ice thickness. At glacial times, however, c. 20,000 years ago, the glaciers were over ten times longer. On the east side of this section of the Mendozina Andes they flowed down to 2,060 m and on the west side to c. 1,220 m above sea level. The massifs of Cerro Aconcagua (6,962 m), Cerro Tupungato (6,550 m) and Nevado Juncal (6,110 m) are tens of kilometres away from each other and were connected by a joint ice-stream network. Its dendritic glacier arms, i.e. components of valley glaciers, were up to 112.5 km long, over 1,020 m (in places 1,250 m) thick, and spanned a vertical distance of 5,150 m. The climatic glacier snowline (ELA) was lowered from its current 4,600 m to 3,200 m at glacial times.
Flora.
The Andean region cuts across several natural and floristic regions due to its extension from Caribbean Venezuela to the cold, windy and wet Cape Horn, passing through the hyperarid Atacama Desert. Rainforests used to encircle much of the northern Andes but are now greatly diminished, especially in the Chocó and inter-Andean valleys of Colombia. In direct contrast to the humid Andean slopes, the slopes in most of western Peru, Chile and Argentina are relatively dry. Along with several interandean valleys, they are typically dominated by deciduous woodland, shrub and xeric vegetation, reaching an extreme on the slopes near the virtually lifeless Atacama Desert.
About 30,000 species of vascular plants live in the Andes with roughly half being endemic to the region, surpassing the diversity of any other hotspot. The small tree 'Cinchona pubescens', a source of quinine which is used to treat malaria, is found widely in the Andes as far south as Bolivia. Other important crops that originated from the Andes are tobacco and potatoes. The high-altitude 'Polylepis' forests and woodlands are found in the Andean areas of Colombia, Ecuador, Peru, Bolivia and Chile. These trees, by locals referred to as Queñua, Yagual and other names, can be found at altitudes of above sea level. It remains unclear if the patchy distribution of these forests and woodlands is natural, or the result of clearing which began during the Incan period. Regardless, in modern times the clearance has accelerated, and the trees are now considered to be highly endangered, with some believing that as little as 10% of the original woodland remains.
Fauna.
The Andes is rich in fauna: With almost 3,500 species, of which roughly 2/3 are endemic to the region, the Andes is the most important region in the world for amphibians.
The diversity of animals in the Andes is high, with almost 600 species of mammals (13% endemic), more than 1,700 species of birds (about 1/3 endemic), more than 600 species of reptile (about 45% endemic), and almost 400 species of fish (about 1/3 endemic).
The Vicuña and Guanaco can be found living in the Altiplano, while the closely related domesticated Llama and Alpaca are widely kept by locals as pack animals and for their meat and wool. The crepuscular (active during dawn and dusk) chinchillas, two threatened members of the rodent order, inhabit the Andes' alpine regions. The Andean Condor, the largest bird of its kind in the Western Hemisphere, occurs throughout much of the Andes but generally in very low densities. Other animals found in the relatively open habitats of the high Andes include the huemul, cougar, foxes in the genus 'Pseudalopex', and, for birds, certain species of tinamous (notably members of the genus 'Nothoprocta'), Andean Goose, Giant Coot, flamingos (mainly associated with hypersaline lakes), Lesser Rhea, Andean Flicker, Diademed Sandpiper-plover, miners, sierra-finches and diuca-finches.
Lake Titicaca hosts several endemics, among them the highly endangered Titicaca Flightless Grebe and Titicaca Water Frog. A few species of hummingbirds, notably some hillstars, can be seen at altitudes above , but far higher diversities can be found at lower altitudes, especially in the humid Andean forests ('cloud forests') growing on slopes in Colombia, Ecuador, Peru, Bolivia and far northwestern Argentina. These forest types, which include the Yungas and parts of the Chocó, are very rich in flora and fauna, although few large mammals exist, exceptions being the threatened Mountain Tapir, Spectacled Bear and Yellow-tailed Woolly Monkey.
Birds of humid Andean forests include mountain-toucans, quetzals and the Andean Cock-of-the-rock, while mixed-species flocks dominated by tanagers and furnariids are commonly seen, in contrast to several vocal but typically cryptic species of wrens, tapaculos and antpittas.
A number of species such as the Royal Cinclodes and White-browed Tit-spinetail are associated with 'Polylepis', and consequently also threatened.
Human activity.
The Andes mountains form a north-south axis of cultural influences. A long series of cultural development culminated in the expansion of the Inca civilization and Inca Empire in the central Andes during the 15th century. The Incas formed this civilization through imperialistic militarism as well as careful and meticulous governmental management. The government sponsored the construction of aqueducts and roads in addition to preexisting installations. Some of these constructions are still in existence today.
Devastated by European diseases to which they had no immunity, and by civil wars, the Incas were defeated in 1532 by an alliance composed of tens of thousands of allies from nations they had subjugated (e.g. Huancas, Chachapoyas, Cañaris) and a small army of 180 Spaniards led by Francisco Pizarro. One of the few Inca sites the Spanish never found in their conquest was Machu Picchu, which lay hidden on a peak on the eastern edge of the Andes where they descend to the Amazon. The main surviving languages of the Andean peoples are those of the Quechua and Aymara language families. Woodbine Parish and Joseph Barclay Pentland surveyed a large part of the Bolivian Andes from 1826 to 1827.
In modern times, the largest Andean cities are Bogotá, Colombia, with a population of about eight million, Santiago de Chile, and Medellin, Colombia.
Transportation.
Several major cities are either in the Andes or in the foothills, among which are Bogotá, Medellín and Cali, Colombia; Quito, Ecuador; Mérida, Venezuela; La Paz, Bolivia; Santiago, Chile, and Cusco, Peru. These and most other cities and large towns are connected with asphalt-paved roads, while smaller towns are often connected by dirt roads, which may require a four-wheel-drive vehicle.
The rough terrain has historically put the costs of building highways and railroads that cross the Andes out of reach of most neighboring countries, even with modern civil engineering practices. For example, the main crossover of the Andes between Argentina and Chile is still accomplished through the Paso Internacional Los Libertadores. Only recently have the ends of some highways that came rather close to one another from the east and the west been connected. Much of the transportation of passengers is done via aircraft.
However, there is one railroad that connects Chile with Argentina via the Andes, and there are others that make the same connection via southern Bolivia. See railroad maps of that region.
There are one or more highways in Bolivia that cross the Andes. Some of these were built during a period of war between Bolivia and Paraguay, in order to transport Bolivian troops and their supplies to the war front in the lowlands of southeastern Bolivia and western Paraguay.
For decades, Chile claimed ownership of land on the eastern side of the Andes. However, these claims were given up in about 1870 during the War of the Pacific between Chile and the allied Bolivia and Peru, in a diplomatic deal to keep Argentina out of the war. The Chilean Army and Chilean Navy defeated the combined forces of Bolivia and Peru, and Chile took over Bolivia's only province on the Pacific Coast, as well as some land from Peru that was returned to Peru decades later. Bolivia has been a completely landlocked country ever since. It mostly uses seaports in eastern Argentina and Uruguay for international trade because its diplomatic relations with Chile have been suspended since 1978.
Because of the tortuous terrain in places, villages and towns in the mountains, to which travel via motorized vehicles is of little use, are still located in the high Andes of Argentina, Bolivia, Peru, and Ecuador. Locally, the relatives of the camel, the llama and the alpaca, continue to serve as important pack animals, but this use has generally diminished in modern times. Donkeys, mules, and horses are also useful.
Agriculture.
The ancient peoples of the Andes such as the Incas have practiced irrigation techniques for over 6,000 years. Because of the mountain slopes, terracing has been a common practice. Terracing, however, was only extensively employed after Incan imperial expansions to fuel their expanding realm. The potato holds a very important role as an internally consumed staple crop. Maize was also an important crop for these people, and was used for the production of chicha, important to Andean native people. Currently, tobacco, cotton and coffee are the main export crops. Coca, despite eradication programmes in some countries, remains an important crop for legal local use in a mildly stimulating herbal tea, and, both controversially and illegally, for the production of cocaine.
Mining.
The Andes rose to fame for their mineral wealth during the Spanish conquest of South America. Although Andean Amerindian peoples crafted ceremonial jewelry of gold and other metals, the mineralizations of the Andes were first mined on a large scale after the Spanish arrival. Potosí in present-day Bolivia and Cerro de Pasco in Peru were among the principal mines of the Spanish Empire in the New World. Río de la Plata and Argentina derive their names from the silver of Potosí.
Currently, mining in the Andes of Chile and Peru places these countries as the first and third major producers of copper in the world. Peru also contains the largest goldmine in the world: the Yanacocha. The Bolivian Andes produce principally tin although historically silver mining had a huge impact on the economy of 17th century Europe.
There is a long history of mining in the Andes, from the Spanish silver mines in Potosí in the 16th century to the vast current porphyry copper deposits of Chuquicamata and Escondida in Chile and Toquepala in Peru. Other metals including iron, gold and tin in addition to non-metallic resources are important.
Peaks.
This list contains some of the major peaks in the Andes mountain range. The highest peak is Aconcagua of Argentina (see below).
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1356'>
Ancylopoda
Ancylopoda is a group of browsing, herbivorous mammals in the Perissodactyla that show long, curved and cleft claws. Morphological evidence indicates the Ancylopoda diverged from the tapirs, rhinoceroses and horses (Euperissodactyla) after the Brontotheria; however, earlier authorities such as Osborn sometimes considered the Ancylopoda to be outside Perissodactyla or, as was popular more recently, to be related to Brontotheria.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1358'>
Anchor
An anchor is a device normally made of metal, that is used to connect a vessel to the bed of a body of water to prevent the craft from drifting due to wind or current. The word derives from Latin 'ancora', which itself comes from the Greek ἄγκυρα ('ankura').
Anchors can either be temporary or permanent. A permanent anchor is used in the creation of a mooring, and is rarely moved; a specialist service is normally needed to move or maintain it. Vessels carry one or more temporary anchors, which may be of different designs and weights.
A sea anchor is a drogue, not in contact with the seabed, used to control a drifting vessel.
Overview.
Anchors achieve holding power either by 'hooking' into the seabed, or via sheer mass, or a combination of the two. Permanent moorings use large masses (commonly a block or slab of concrete) resting on the seabed. Semi-permanent mooring anchors (such as mushroom anchors) and large ship's anchors derive a significant portion of their holding power from their mass, while also hooking or embedding in the bottom. Modern anchors for smaller vessels have metal flukes which hook on to rocks on the bottom or bury themselves in soft seabed.
The vessel is attached to the anchor by the rode, which is made of chain, cable, rope, or a combination of these. The ratio of the length of rode to the water depth is known as the scope. Anchoring with sufficient scope and/or heavy chain rode brings the direction of strain close to parallel with the seabed. This is particularly important for light, modern anchors designed to bury in the bottom, where scopes of 5-to-1 to 7-to-1 are common, whereas heavy anchors and moorings can use a scope of 3-to-1, or less.
Since all anchors that embed themselves in the bottom require the strain to be along the seabed, anchors can be broken out of the bottom by shortening the rope until the vessel is directly above the anchor; at this point the anchor chain is 'up and down', in naval parlance. If necessary, motoring slowly around the location of the anchor also helps dislodge it. Anchors are sometimes fitted with a tripping line attached to the crown, by which they can be unhooked from rocks or coral.
The term 'aweigh' describes an anchor when it is hanging on the rode and is not resting on the bottom. This is linked to the term 'to weigh anchor', meaning to lift the anchor from the sea bed, allowing the ship or boat to move. An anchor is described as 'aweigh' when it has been broken out of the bottom and is being hauled up to be 'stowed'. 'Aweigh' should not be confused with 'under way', which describes a vessel which is not 'moored' to a dock or 'anchored', whether or not the vessel is moving through the water.
Evolution of the anchor.
The earliest anchors were probably rocks, and many rock anchors have been found dating from at least the Bronze Age. Pre-European Māori waka (canoes) used one or more hollowed stones, tied with flax ropes, as anchors. Many modern moorings still rely on a large rock as the primary element of their design. However, using pure mass to resist the forces of a storm only works well as a permanent mooring; a large enough rock would be nearly impossible to move to a new location.
The ancient Greeks used baskets of stones, large sacks filled with sand, and wooden logs filled with lead. According to Apollonius Rhodius and Stephen of Byzantium, anchors were formed of stone, and Athenaeus states that they were also sometimes made of wood. Such anchors held the vessel merely by their weight and by their friction along the bottom. Iron was afterwards introduced for the construction of anchors, and an improvement was made by forming them with teeth, or 'flukes', to fasten themselves into the bottom.
Admiralty Pattern.
The Admiralty Pattern, 'A.P.', or simply 'Admiralty', and also known as 'Fisherman', is the anchor shape most familiar to non-sailors. It consists of a central shank with a ring or shackle for attaching the rode. At the other end of the shank there are two arms, carrying the flukes, while the stock is mounted to the other end, at ninety degrees to the arms. When the anchor lands on the bottom, it will generally fall over with the arms parallel to the seabed. As a strain comes onto the rode, the stock will dig into the bottom, canting the anchor until one of the flukes catches and digs into the bottom.
This basic design remained unchanged for centuries, with the most significant changes being to the overall proportions, and a move from stocks made of wood to iron stocks. Since one fluke always protrudes up from the set anchor, there is a great tendency of the rode to foul the anchor as the vessel swings due to wind or current shifts. When this happens, the anchor may be pulled out of the bottom, and in some cases may need to be hauled up to be re-set. In the mid-19th century, numerous modifications were attempted to alleviate these problems, as well as improve holding power, including one-armed mooring anchors. The most successful of these 'patent anchors', the Trotman Anchor, introduced a pivot where the arms join the shank, allowing the 'idle' arm to fold against the shank.
Handling and storage of these anchors requires special equipment and procedures. Once the anchor is hauled up to the hawsepipe, the ring end is hoisted up to the end of a timber projecting from the bow known as the cathead. The crown of the anchor is then hauled up with a heavy tackle until one fluke can be hooked over the rail. This is known as 'catting and fishing' the anchor. Before dropping the anchor, the fishing process is reversed, and the anchor is dropped from the end of the cathead.
Stockless anchor.
The stockless anchor, patented in England in 1821, represented the first significant departure in anchor design in centuries. Though their holding-power-to-weight ratio is significantly lower than admiralty pattern anchors, their ease of handling and stowage aboard large ships led to almost universal adoption. In contrast to the elaborate stowage procedures for earlier anchors, stockless anchors are simply hauled up until they rest with the shank inside the hawsepipes, and the flukes against the hull (or inside a recess in the hull).
While there are numerous variations, stockless anchors consist of a set of heavy flukes connected by a pivot or ball and socket joint to a shank. Cast into the crown of the anchor is a set of tripping palms, projections that drag on the bottom, forcing the main flukes to dig in.
Small boat anchors.
Until the mid-20th century, anchors for smaller vessels were either scaled-down versions of admiralty anchors, or simple grapnels. As new designs with greater holding-power-to-weight ratios appeared, a great variety of anchor types emerged. Many of these designs are still under patent, and other types are best known by their original trademarked names.
Grapnel anchor.
A traditional design, the grapnel is merely a shank with four or more tines. It has a benefit in that, no matter how it reaches the bottom, one or more tines will be aimed to set. In coral, or rock, it is often able to set quickly by hooking into the structure, but may be more difficult to retrieve. A grapnel is often quite light, and may have additional uses as a tool to recover gear lost overboard. Its weight also makes it relatively easy to move and carry, however its shape is generally not very compact and it may be awkward to stow unless a collapsing model is used.
Grapnels rarely have enough fluke area to develop much hold in sand, clay, or mud. It is not unknown for the anchor to foul on its own rode, or to foul the tines with refuse from the bottom, preventing it from digging in. On the other hand, it is quite possible for this anchor to find such a good hook that, without a trip line from the crown, it is impossible to retrieve.
Herreshoff anchor.
Designed by famous yacht designer L. Francis Herreshoff, this is essentially the same pattern as an admiralty anchor, albeit with small diamond shaped flukes or palms. The novelty of the design lay in the means by which it could be broken down into three pieces for stowage. In use, it still presents all the issues of the admiralty pattern anchor.
Northill anchor.
Originally designed as a lightweight anchor for seaplanes, this design consists of two plow-like blades mounted to a shank, with a folding stock crossing through the crown of the anchor.
CQR (secure) plough anchor.
So named due to its resemblance to a traditional agricultural plough (or more specifically two ploughshares), many manufacturers produce a plough-style design, all based on or direct copies of the original CQR (Secure), a 1933 design patented in the UK (US patent in 1934) by mathematician Geoffrey Ingram Taylor. Ploughs are popular with cruising sailors and other private boaters. They are generally good in all bottoms, but not exceptional in any. The CQR design has a hinged shank, allowing the anchor to turn with direction changes rather than breaking out, while other plough types have a rigid shank. Plough anchors are usually stowed in a roller at the bow.
Owing to the use of lead or other dedicated tip-weight, the plough is heavier than average for the amount of resistance developed, and may take more careful technique and a longer period to set thoroughly. It cannot be stored in a hawsepipe.
Delta anchor.
The Delta was developed in the 1980s for commercialization by British marine manufacturer Simpson–Lawrence.
Danforth anchor.
American Richard Danforth invented the Danforth pattern in the 1940s for use aboard landing craft. It uses a stock at the crown to which two large flat triangular flukes are attached. The stock is hinged so the flukes can orient toward the bottom (and on some designs may be adjusted for an optimal angle depending on the bottom type). Tripping palms at the crown act to tip the flukes into the seabed. The design is a burying variety, and once well set can develop high resistance. Its light weight and compact flat design make it easy to retrieve and relatively easy to store; some anchor rollers and hawsepipes can accommodate a fluke-style anchor.
A Danforth will not usually penetrate or hold in gravel or weeds. In boulders and coral it may hold by acting as a hook. If there is much current, or if the vessel is moving while dropping the anchor, it may 'kite' or 'skate' over the bottom due to the large fluke area acting as a sail or wing. Once set, the anchor tends to break out and reset when the direction of force changes dramatically, such as with the changing tide, and on some occasions it might not reset but instead drag.
The FOB HP anchor, designed by Guy Royer in Brittany in the 1970s, is a Danforth variant designed to give increased holding through its use of rounded flukes setting at a 30° angle.
The Fortress is an aluminum alloy Danforth variant which was designed by American Don Hallerberg. This anchor can be disassembled for storage and it features an adjustable 32° and 45° shank/fluke angle to improve holding capability in common sea bottoms such as hard sand and soft mud. This anchor performed well in a 1989 US Naval Sea Systems Command (NAVSEA) test, and in an August 2014 holding power test that was conducted in the soft mud bottoms of the Chesapeake Bay.
Bruce or claw anchor.
This claw-shaped anchor was designed by Peter Bruce from the Isle of Man in the 1970s. Bruce gained his early reputation from the production of large-scale commercial anchors for ships and fixed installations such as oil rigs. The Bruce and its copies, known generically as 'claws', have become a popular option for small boaters. It was intended to address some of the problems of the only general-purpose option then available, the plough. Claw-types set quickly in most seabeds and although not an articulated design, they have the reputation of not breaking out with tide or wind changes, instead slowly turning in the bottom to align with the force.
Claw types have difficulty penetrating weedy bottoms and grass. They offer a fairly low holding-power-to-weight ratio and generally have to be oversized to compete with newer types. On the other hand, they have a good reputation in boulder bottoms, perform relatively well with low rode scopes and set fairly reliably. They cannot be used with hawsepipes.
Recent designs.
In recent years there has been something of a spurt in anchor design. Primarily designed to set very quickly, then generate high holding power, these anchors (mostly proprietary inventions still under patent) are finding homes with users of small to medium-sized vessels.
Permanent anchors.
These are used where the vessel is permanently or semi-permanently sited, for example in the case of lightvessels or channel marker buoys. The anchor needs to hold the vessel in all weathers, including the most severe storm, but needs to be lifted only occasionally, at most – for example, only if the vessel is to be towed into port for maintenance. An alternative to using an anchor under these circumstances, especially if the anchor need never be lifted at all, may be to use a pile driven into the seabed.
Permanent anchors come in a wide range of types and have no standard form. A slab of rock with an iron staple in it to attach a chain to would serve the purpose, as would any dense object of appropriate weight (for instance, an engine block). Modern moorings may be anchored by sand screws, which look and act very much like oversized screws drilled into the seabed, or by barbed metal beams pounded in (or even driven in with explosives) like pilings, or by a variety of other non-mass means of getting a grip on the bottom. One method of building a mooring is to use three or more conventional anchors laid out with short lengths of chain attached to a swivel, so no matter which direction the vessel moves, one or more anchors will be aligned to resist the force.
Mushroom anchor.
The mushroom anchor is suitable where the seabed is composed of silt or fine sand. It was invented by Robert Stevenson, for use by an 82-ton converted fishing boat, 'Pharos', which was used as a lightvessel between 1807 and 1810 near to Bell Rock whilst the lighthouse was being constructed. It was equipped with a 1.5-ton example.
It is shaped like an inverted mushroom, the head becoming buried in the silt. A counterweight is often provided at the other end of the shank to lay it down before it becomes buried.
A mushroom anchor will normally sink in the silt to the point where it has displaced its own weight in bottom material, thus greatly increasing its holding power. These anchors are only suitable for a silt or mud bottom, since they rely upon suction and cohesion of the bottom material, which rocky or coarse sand bottoms lack. The holding power of this anchor is at best about twice its weight until it becomes buried, when it can be as much as ten times its weight. They are available in sizes from about 10 lb up to several tons.
Deadweight anchor.
This is an anchor which relies solely on being a heavy weight. It is usually just a large block of concrete or stone at the end of the chain. Its holding power is defined by its weight underwater (i.e. taking its buoyancy into account) regardless of the type of seabed, although suction can increase this if it becomes buried. Consequently deadweight anchors are used where mushroom anchors are unsuitable, for example in rock, gravel or coarse sand. An advantage of a deadweight anchor over a mushroom is that if it does become dragged, then it continues to provide its original holding force. The disadvantage of using deadweight anchors in conditions where a mushroom anchor could be used is that it needs to be around ten times the weight of the equivalent mushroom anchor.
Screw anchor.
Screw anchors can be used to anchor permanent moorings, floating docks, fish farms, etc. These anchors must be screwed into the seabed with the use of a tool, so require access to the bottom, either at low tide or by use of a diver. Hence they can be difficult to install in deep water without special equipment.
Weight for weight, screw anchors have a higher holding power than other permanent designs, and so can be cheap and relatively easily installed, although they may not be ideal in extremely soft mud.
High-holding-power anchors.
There is a need in the oil-and-gas industry to resist large anchoring forces when laying pipelines and for drilling vessels. These anchors are installed and removed using a support tug and pennant/pendant wire. Some examples are the Stevin range supplied by Vrijhof Ankers. Large plate anchors such as the Stevmanta are used for permanent moorings.
Anchoring gear.
The elements of anchoring gear include the anchor, the cable (also called a rode), the method of attaching the two together, the method of attaching the cable to the ship, charts, and a method of learning the depth of the water.
Vessels may carry a number of anchors: bower anchors (formerly known as 'sheet anchors') are the main anchors used by a vessel and normally carried at the bow of the vessel. A kedge anchor is a light anchor used for warping a vessel, a process also known as 'kedging', or more commonly on yachts for mooring quickly or in benign conditions. A stream anchor, which is usually heavier than a 'kedge anchor', can be used for kedging or warping in addition to temporary mooring and restraining stern movement in tidal conditions or in waters where vessel movement needs to be restricted, such as rivers and channels. A killick anchor is a small, possibly improvised, anchor.
Charts are vital to good anchoring. Knowing the location of potential dangers, as well as being useful in estimating the effects of weather and tide in the anchorage, is essential in choosing a good place to drop the hook. One can get by without referring to charts, but they are an important tool and a part of good anchoring gear, and a skilled mariner would not choose to anchor without them.
The depth of water is necessary for determining scope, which is the ratio of length of cable to the depth measured from the highest point (usually the anchor roller or bow chock) to the seabed. For example, if the water is 25 ft (8 m) deep, and the anchor roller is 3 ft (1 m) above the water, the scope is the ratio between the amount of cable let out and 28 ft (9 m). For this reason it is important to have a reliable and accurate method of measuring the depth of water.
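The scope arithmetic above can be sketched as a short calculation; the function name and figures below are illustrative, using the same 25 ft depth and 3 ft freeboard as the example in the text.

```python
# Scope is the ratio of rode let out to the vertical distance from the
# anchor roller (or bow chock) down to the seabed, i.e. depth + freeboard.

def required_cable(depth_ft: float, freeboard_ft: float, scope: float) -> float:
    """Return the length of rode to let out for a desired scope.

    depth_ft: water depth at the anchorage (allow for tide)
    freeboard_ft: height of the anchor roller above the waterline
    scope: desired ratio, e.g. 5 for 5-to-1
    """
    return scope * (depth_ft + freeboard_ft)

# 25 ft of water with the roller 3 ft above the water, at 5-to-1 scope:
print(required_cable(25, 3, 5))  # 140.0 ft of cable
```

Note that using the charted depth alone would understate the scope needed; the measurement runs from the roller, not the waterline.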
A cable or rode is the rope, chain, or combination thereof used to connect the anchor to the vessel.
Chain rode is relatively heavy but resists abrasion from coral, sharp rocks or shellfish beds, which may abrade a pure rope warp. Fibre rope is more susceptible to abrasion on the seabed or obstructions, and is more likely to fail without warning.
Combinations of a length of chain shackled to the anchor, with rope added to the other end of the chain are a common compromise on small craft.
Anchor Warps.
The best rope for warps is nylon, which is strong and flexible. Terylene (polyester) is stronger but has less stretch. Both ropes sink, so they avoid fouling other craft in crowded anchorages, and neither absorbs much water or breaks down quickly in sunlight. Polypropylene and polythene are not suited to warps as they float, are much weaker than nylon and only slightly stronger than natural fibres, and break down in sunlight. Natural fibres such as manila or hemp are still used in third-world nations but absorb much water, are relatively weak and rot; they do give good grip and are often very cheap. All anchors should have chain at least equal to the boat's length. Some skippers prefer an all-chain warp for added security in coral waters. Boats less than 8 m typically use 6 mm galvanized chain; 8-14 m craft use 9 mm chain; and craft over 14 m use 12 mm chain. The chain should be shackled to the warp through a steel eye or spliced to it using a chain splice. The shackle pin should be securely wired. Either galvanized or stainless steel is suitable for eyes and shackles.
In moderate conditions the ratio of warp to water depth should be 4:1. In rough conditions it should be twice this, the extra length giving more stretch to resist the anchor breaking out. This means that small craft under 5 m should carry at least 50 m of 8 mm warp; 5-8 m craft, 75-100 m of 10 mm warp; 8-14 m craft, 100-125 m of 12 mm warp; and craft over 16 m, the same length of 16 mm warp.
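The sizing rules of thumb above can be collected into one lookup; this is a minimal sketch, and the handling of boundary values and of lengths the text leaves unspecified (e.g. 14-16 m) is an assumption here, as are the function and key names.

```python
# Encodes the warp/chain sizing guidance above. Warp lengths use the upper
# end of each quoted range; band edges at 5 m, 8 m and 14 m are assumptions
# where the text does not say which side a boundary length falls on.

def gear_for_length(boat_m: float, rough: bool = False) -> dict:
    """Suggest warp diameter and length, chain size, and warp-to-depth ratio."""
    if boat_m < 5:
        warp_mm, warp_len = 8, 50
    elif boat_m < 8:
        warp_mm, warp_len = 10, 100
    elif boat_m < 14:
        warp_mm, warp_len = 12, 125
    else:
        warp_mm, warp_len = 16, 125
    chain_mm = 6 if boat_m < 8 else (9 if boat_m <= 14 else 12)
    # Warp-to-depth ratio: 4:1 in moderate conditions, doubled when rough.
    ratio = 8 if rough else 4
    return {"warp_mm": warp_mm, "warp_len_m": warp_len,
            "chain_mm": chain_mm, "scope_ratio": ratio}

print(gear_for_length(10))
# {'warp_mm': 12, 'warp_len_m': 125, 'chain_mm': 9, 'scope_ratio': 4}
```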
Anchoring techniques.
The basic anchoring consists of determining the location, dropping the anchor, laying out the scope, setting the hook, and assessing where the vessel ends up. The ship will seek a location which is sufficiently protected; has suitable holding ground, enough depth at low tide and enough room for the boat to swing.
The location to drop the anchor should be approached from down wind or down current, whichever is stronger. As the chosen spot is approached, the vessel should be stopped or even beginning to drift back. The anchor should be lowered quickly but under control until it is on the bottom. The vessel should continue to drift back, and the cable should be veered out under control so it will be relatively straight.
Once the desired scope is laid out, the vessel should be gently forced astern, usually using the auxiliary motor but possibly by backing a sail. A hand on the anchor line may telegraph a series of jerks and jolts, indicating the anchor is dragging, or a smooth tension indicative of digging in. As the anchor begins to dig in and resist backward force, the engine may be throttled up to get a thorough set. If the anchor continues to drag, or sets after having dragged too far, it should be retrieved and moved back to the desired position (or another location chosen.)
There are techniques of anchoring to limit the swing of a vessel if the anchorage has limited room:
Using an anchor weight, kellet or sentinel.
Lowering a concentrated, heavy weight down the anchor line – rope or chain – directly in front of the bow to the seabed behaves like a heavy chain rode and lowers the angle of pull on the anchor. If the weight is suspended off the seabed it acts as a spring or shock absorber to dampen the sudden actions that are normally transmitted to the anchor and can cause it to dislodge and drag. In light conditions, a kellet will reduce the swing of the vessel considerably. In heavier conditions these effects disappear as the rode becomes straightened and the weight ineffective. It is known in the UK as an 'anchor chum weight' or 'angel'.
Forked moor.
Using two anchors set approximately 45° apart, or wider angles up to 90°, from the bow is a strong mooring for facing into strong winds. To set anchors in this way, first one anchor is set in the normal fashion. Then, taking in on the first cable as the boat is motored into the wind and letting slack while drifting back, a second anchor is set approximately a half-scope away from the first on a line perpendicular to the wind. After this second anchor is set, the scope on the first is taken up until the vessel is lying between the two anchors and the load is taken equally on each cable.
This moor also to some degree limits the range of a vessel's swing to a narrower oval. Care should be taken that other vessels will not swing down on the boat due to the limited swing range.
Bow and stern.
(Not to be mistaken for the 'Bahamian moor', below.) In the 'bow and stern' technique, an anchor is set off each of the bow and the stern, which can severely limit a vessel's swing range and also align it to steady wind, current or wave conditions. One method of accomplishing this moor is to set a bow anchor normally, then drop back to the limit of the bow cable (or to double the desired scope, e.g. 8:1 if the eventual scope should be 4:1, 10:1 if the eventual scope should be 5:1, etc.) to lower a stern anchor. By taking up on the bow cable the stern anchor can be set. After both anchors are set, tension is taken up on both cables to limit the swing or to align the vessel.
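The drop-back arithmetic for this moor is simple enough to sketch; the function name and the 5 m / 4:1 figures below are illustrative, not from the text.

```python
# Bow-and-stern moor: veer cable to double the eventual scope, drop the
# stern anchor there, then take up on the bow cable until the vessel lies
# between the two anchors at the final scope on each.

def drop_back_cable(depth_m: float, final_scope: float) -> tuple:
    """Cable to veer before dropping the stern anchor, and per-rode cable after centring."""
    drop_back = 2 * final_scope * depth_m   # e.g. 8:1 veered if the final scope is 4:1
    final = final_scope * depth_m           # cable on each rode once centred
    return drop_back, final

print(drop_back_cable(5, 4))  # (40, 20): veer 40 m, end with 20 m on each rode
```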
Bahamian moor.
Similar to the above, a 'Bahamian moor' is used to sharply limit the swing range of a vessel, but allows it to swing to a current. One of the primary characteristics of this technique is the use of a swivel as follows: the first anchor is set normally, and the vessel drops back to the limit of anchor cable. A second anchor is attached to the end of the anchor cable, and is dropped and set. A swivel is attached to the middle of the anchor cable, and the vessel connected to that.
The vessel will now swing in the middle of two anchors, which is acceptable in strong reversing currents, but a wind perpendicular to the current may break out the anchors, as they are not aligned for this load.
Backing an anchor.
Also known as 'tandem anchoring', in this technique two anchors are deployed in line with each other, on the same rode. With the foremost anchor reducing the load on the aft-most, this technique can develop great holding power and may be appropriate in 'ultimate storm' circumstances. It does not limit swinging range, and might not be suitable in some circumstances. There are complications, and the technique requires careful preparation and a level of skill and experience above that required for a single anchor.
Kedging.
'Kedging' or 'warping' is a technique for moving or turning a ship by using a relatively light anchor.
In yachts, a kedge anchor is an anchor carried in addition to the main, or bower anchors, and usually stowed aft. Every yacht should carry at least two anchors – the main or 'bower' anchor and a second lighter 'kedge' anchor. It is used occasionally when it is necessary to limit the turning circle as the yacht swings when it is anchored, such as in a very narrow river or a deep pool in an otherwise shallow area.
For ships, a kedge may be dropped while a ship is underway, or carried out in a suitable direction by a tender or ship's boat to enable the ship to be winched off if aground or swung into a particular heading, or even to be held steady against a tidal or other stream.
Historically, it was of particular relevance to sailing warships which used them to outmaneuver opponents when the wind had dropped but might be used by any vessel in confined, shoal water to place it in a more desirable position, provided she had enough manpower.
Club hauling.
Club hauling is an archaic technique. When a vessel is in a narrow channel or on a lee shore so that there is no room to tack the vessel in a conventional manner, an anchor attached to the lee quarter may be dropped from the lee bow. This is deployed when the vessel is head to wind and has lost headway. As the vessel gathers sternway the strain on the cable pivots the vessel around what is now the weather quarter turning the vessel onto the other tack. The anchor is then normally cut away, as it cannot be recovered.
In heraldry.
An anchor frequently appears on the flags and coats of arms of institutions involved with the sea, both naval and commercial, as well as of port cities and seacoast regions and provinces in various countries. There also exists in heraldry the 'Anchored Cross', or Mariner's Cross, a stylized cross in the shape of an anchor. The symbol can be used to signify 'fresh start' or 'hope'. In 1887, the Delta Gamma Fraternity adopted the anchor as its badge to signify hope. The Mariner's Cross is also referred to as St. Clement's Cross, in reference to the way this saint was martyred (being tied to an anchor and thrown from a boat into the Black Sea in 102). Anchored crosses are occasionally a feature of coats of arms in which context they are referred to by the heraldic terms 'anchry' or 'ancre'.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1359'>
Anbar (town)
Anbar was a town in Iraq, at 33°22′N, 43°49′E, on the east bank of the Euphrates, just south of the Nahr 'Isa, or Sakhlawieh canal, the northernmost of the canals connecting that river with the Tigris.
History.
Anbar was originally called Pērōz-Šāpūr or Pērōz-Šābuhr (meaning 'Victorious Shapur'; also transliterated 'prgwzšhypwhr'), and became known as Perisapora to the Greeks and Romans. The city was founded ca. 350 by Shapur II, Sassanid king of Persia, and located in the Sassanid province of Asōristān. Perisapora was sacked and burned by Emperor Julian in April 363, during his invasion of the Sassanid Empire. The town became a refuge for the Arab, Christian, and Jewish colonies of that region. According to medieval Arabic sources, most of the inhabitants of the town migrated north to found the city of Hdatta south of Mosul.
Anbar was adjacent or identical to the Babylonian Jewish center of Nehardea, and lies a short distance from the present-day town of Fallujah, formerly the Babylonian Jewish center of Pumbeditha.
The name of the town was then changed to Anbar ('granaries'). Abu al-Abbas as-Saffah, the founder of the Abbasid caliphate, made it his capital, and such it remained until the founding of Baghdad in 762.
It continued to be a place of much importance throughout the Abbasid period.
Today.
It is now entirely deserted, occupied only by mounds of ruins, whose great number indicates the city's former importance.
</doc>
<doc url='http://en.wikipedia.org/wiki?curid=1360'>
Anazarbus
Anazarbus (med. Ain Zarba; mod. Anavarza) was an ancient Cilician city, situated in Anatolia in modern Turkey, in the present Çukurova (or classical Aleian plain) about 15 km west of the main stream of the present Ceyhan River (or classical Pyramus river) and near its tributary the Sempas Su.
A lofty isolated ridge formed its acropolis. Though some of the masonry in the ruins is certainly pre-Roman, the Suda's identification of it with Cyinda, famous as a treasure city in the wars of Eumenes of Cardia, cannot be accepted in the face of Strabo's express location of Cyinda in western Cilicia.
It was founded by Assyrians. Under the early Roman empire the place was known as Caesarea, and was the metropolis of Cilicia Secunda. It was the home of the poet Oppian. Rebuilt by the emperor Justin I after an earthquake in the 6th century, it became Justinopolis (525); but the old native name persisted, and when Thoros I, king of Lesser Armenia, made it his capital early in the 12th century, it was known as Anazarva.
Its great natural strength and situation, not far from the mouth of the Sis pass and near the great road which debouched from the Cilician Gates, made Anazarbus play a considerable part in the struggles between the Byzantine Empire and the early Muslim invaders. It had been rebuilt by Harun al-Rashid in 796 and refortified at great expense by the Hamdanid Sayf al-Dawla (mid-10th century), but was then sacked by the Crusaders and returned to the Armenians. Most of the remaining fortifications, including the curtain walls and keep, date to this period and were built by the Armenians. The Mamluk Empire of Egypt finally destroyed the city in 1374.
The present wall of the lower city is of late construction. It encloses a mass of ruins conspicuous in which are a fine triumphal arch, the colonnades of two streets, a gymnasium, etc. A stadium and a theatre lie outside the walls to the south. The remains of the acropolis fortifications are very interesting, including roads and ditches hewn in the rock; but beyond ruins of two churches, a gatehouse, and a fine keep built by Thoros I, there are no notable structures in the upper town. For picturesqueness the site is not equalled in Cilicia, and it is worthwhile to trace the three fine aqueducts to their sources. A necropolis on the escarpment to the south of the curtain wall can also be seen, complete with signs of illegal modern excavations.
A visit in December 2002 showed that the three aqueducts mentioned above have been nearly completely destroyed. Only small, isolated sections are left standing with the largest portion lying in a pile of rubble that stretches the length of where the aqueducts once stood. A powerful earthquake that struck the area in 1945 is thought to be responsible for the destruction.
A modest Turkish farming village (Dilekkaya) lies to the southwest of the ancient city. A small outdoor museum with some of the artifacts collected in the area can be viewed for a small fee. Also nearby are some beautiful mosaics discovered in a farmer's field. Inquire at the museum for a viewing.
Anazarbus/Anavarsa was one of a chain of Armenian fortifications stretching through Cilicia. Sis Castle (modern Kozan, Adana) lies to the north, while Tumlu Kale (Tumlu Castle) lies to the southwest and Amouda Castle to the southeast.
</doc>

