Wednesday, August 22, 2012

Bellum se ipsum alet

La guerra se nutre a sí misma
Der Krieg ernährt den Krieg
The war will feed itself
--Marcus Porcius Cato ("the Elder", 234–149 BC)

World War One in Colour
Until now, the First World War has always been seen as something that happened in black and white, but that is not how it really looked.
Using previously unseen archive footage from Russia, Germany, France, Italy, the United States and Britain's Imperial War Museum, this series has been colourised using the latest computer technology, restoring colour to the First World War for the first time, just as it was experienced by those who fought it and those who survived it. It took five months and 490 technicians to colourise the largely unseen black-and-white archive material.
It was during the First World War that the fighter aircraft was developed, poison gas was introduced, the tank and the flamethrower were invented, and the machine gun and heavy artillery came into widespread use, producing destruction on a massive scale.
This series offers a unique perspective on the events of 1914 to 1918, which saw 65 million men take up arms against one another and plunge the world into chaos. It is the definitive guide to the First World War as it has never been seen before.
  1. Catastrophe (47:32)
The Spanish Civil War

With the proclamation of the Second Republic on 14 April 1931, Spain entered a democratic period whose principles were enshrined in the new republican constitution. Enjoying broad popular support, the Republic embarked on an ambitious programme of reforms (of the army, education, the countryside and regional autonomy) that met fierce resistance from the military, the Church and the oligarchy. The electoral victory of the Popular Front in February 1936 opened a period of political radicalisation and social unrest. In this climate, on 18 July 1936, came the military uprising that the rebels would later call the Alzamiento Nacional. The Spanish Civil War had begun.
Throughout the Spanish Civil War the Republic lived in a state of permanent crisis, with internal struggles never far from the surface. The confrontation between the anarcho-syndicalist revolutionaries and the POUM on one side and the communists and republican forces on the other weakened the constitutional government. While the communists postponed any revolution until the war was won, the anarchists tried to wage war and revolution at the same time. In the end the course of the war, which handed victory to the rebels, frustrated the expectations of both.

La Guerra Civil Española: Episodio 3 - La Guerra De Los Idealistas (The War of the Idealists)

La Guerra Civil Española : Episodio 5
The Second World War in Colour

"The Second World War in Colour" or simply "Colour of War" as it is released here in Belgium is a very good documentary about WWII and how it affected life around the world between 1940 and 1945. The entire documentary is a collection of authentic images, all in colour, of which a lot have been previously unreleased. Some images can be quite shocking at times and no doubt leave you with a bitter impression on how horrible war can be. The commentator also reads out a lot of letters or diary fragments from people who lived or died during World War II. Knowing this, you might think that the documentary in a whole would loose coherence but it's quite the opposite because even though "Colour of War" is mainly a collection of authentic images and letters it felt like everything fitted together very well.
Almost all the major events of the period 1936-1945 are included: the German invasions of Poland and France, the bombing of London, Pearl Harbor, the confrontation between the American fleet and the German U-boats, Stalingrad, the American invasions of the Japanese islands, D-Day, the Holocaust, the Japanese kamikazes, Hiroshima... it's all there.


World War II in HD
WWII in HD (known as World War II: Lost Films in the UK) is a 10-part American documentary television miniseries that originally aired from November 15 to November 19, 2009 on the History Channel. The program focuses on the firsthand experiences of twelve American service members during World War II, including an Army nurse, a member of the Tuskegee Airmen, a second-generation Japanese American and prisoner of war, and an Austrian Jewish immigrant. The twelve recorded their experiences in both theaters of the war, and some gave later interviews; found footage from the battlefield is paired with their stories.
The episodes premiered on five consecutive days, with two episodes per day. The series is narrated by Gary Sinise and was produced by Lou Reda Productions in Easton, Pennsylvania, United States.


Tuesday, August 14, 2012

The Creation of Man by Chronos

Source
Explore our timeline of human evolution
Introduction: Human Evolution
by John Pickrell
September 2006
The incredible story of our evolution from ape ancestors spans 6 million years or more, and features the acquisition of traits ranging from bipedal walking, large brains, hairlessness, tool-making, hunting and the harnessing of fire, to the more recent development of language, art, culture and civilisation.
Darwin's The Origin of Species, published in 1859, suggested that humans were descended from African apes. However, no fossils of our ancestors were discovered in Africa until 1924, when Raymond Dart dug up the "Taung child" - a 3-million to 4 million-year-old Australopithecine.
Over the last century, many spectacular discoveries have shed light on the history of the human family. Somewhere between 12 and 19 different species of early humans are recognised, though palaeoanthropologists bitterly dispute how they are related. Famous fossils include the remarkably complete "Lucy", dug up in Ethiopia in 1974, and the astonishing "hobbit" species, Homo floresiensis, found on an Indonesian island in 2004.

Walking tall

Humans are really just a peculiar African ape - we share about 98% of our DNA with chimpanzees, our closest living relatives. Genetics and fossil evidence hint that we last shared a common ancestor 7 to 10 million years ago - even if we continued hybridising long after.
At around 6 million years ago, the first apes to walk on two legs appear in the fossil record. Despite the fact that many of these Australopithecines and other early humans were no bigger than chimps and had similar-sized brains, the shift to bipedalism was highly significant. Aside from our large brain, bipedalism is perhaps the most important difference between humans and apes, as it freed our hands to use tools.
Bipedalism may have evolved when drier conditions shrank dense African forests. It would have allowed our ancestors to spot predators from further away, reach hanging fruit from the ground, and reduce their exposure to sunlight. Evidence that Australopithecines walked upright includes analysis of the shape of their bones and fossilised footprints.
One famous member of the species Australopithecus afarensis is the remarkably complete fossil found by palaeoanthropologist Donald Johanson in Hadar, Ethiopia, in 1974. The 3.2-million-year-old fossil was named Lucy, after the Beatles' song Lucy in the Sky with Diamonds.
She stood around 1.1 metres (3.5 feet) tall and, although she walked on two legs, she probably had a less graceful gait than us, since she walked with bent knees.
Scientists have modelled her gait using computers. The species' characteristically long arms and curved fingers suggest that at least some Australopithecines were still good climbers.
Hundreds of other fossils of Australopithecus afarensis have now also been discovered. Other related early human species include Australopithecus africanus - such as the Taung child - 3.5-million-year-old Kenyanthropus platyops, 5.8-million to 4.4-million-year-old Ardipithecus, 5.8-million-year-old Orrorin tugenensis and 6-million-year-old Sahelanthropus tchadensis.

Tooled up

Australopithecines are thought to be the ancestors of Homo, the group to which our own species, Homo sapiens, belongs.
However, Australopithecines may also have given rise to another branch of hominid evolution - the vegetarian Paranthropus species. Around 2.7 million years ago, species such as Paranthropus boisei in east Africa evolved to take advantage of the dry grasslands. This included the development of enormous jaws and chewing muscles for grinding up tough roots and tubers.
By 2.4 million years ago, Homo habilis had appeared - the first recognisably human-like hominid in the fossil record - living alongside P. boisei. Their bodies were around two-thirds the size of ours, but their brains were significantly larger than those of the Australopithecines, with a volume of about 600 cubic centimetres.
H. habilis had much smaller teeth and jaws than Paranthropus and was probably the first human to eat large quantities of meat. This meaty diet, acquired through scavenging, may have provided the energy required to kick-start an increase in brain size. A mutation that weakened our jaw muscles and gave our brains more space to grow may also lie behind the big brains we have today.
H. habilis - which means "handy man" - was also the first early human to habitually create tools and use them to break bones and extract marrow. This tool-making tradition, known as Oldowan, lasted virtually unchanged for a million years. Oldowan tools were made by breaking an angular rock with a "hammerstone" to give simple, sharp-edged stone flakes for chopping and slicing.
Despite their own increases in brain size, the Paranthropus group of species had become extinct by 1.2 million years ago. Some experts speculate that it was learning to work as a team against predators that gave Homo the edge.

Modern lookers

At around 1.65 million years ago, another early human, Homo ergaster, started to create tools in a slightly different fashion. This so-called Acheulean tradition was the tool-making technology used for nearly the entire Stone Age, and practiced until 100,000 years ago. Acheulean tools, such as hand axes and cleavers, were larger and more sophisticated than their predecessors'. They may have been status symbols as well as tools.
Homo ergaster first appeared in Africa around 2 million years ago, and in many ways resembled us. Though they had brow ridges, they had lost the stoop and long arms of their ancestors. They may have been even more slender than us and were probably well-adapted to running long distances. Some experts believe that they were the first to sport largely hairless bodies, and to sweat, though another theory puts our hairlessness down to an aquatic phase.
One famous example of a more modern-looking early human is the Turkana boy, a teenager when he died 1.6 million years ago in Kenya. The shape of this fossil showed that the human pelvis had reached today's narrow proportions. Combined with the growing size of the human head and brain, this had far-reaching implications: human women now need help for a successful birth; and human babies are born earlier, and need a longer period of childhood care, than those of apes.
Meat-eating, however, may have allowed us to become early weaners.
H. ergaster may have been the first early human to leave Africa. Bones dated to around 1.75 million years ago have been found in Dmanisi in Georgia.
Shortly afterwards, Homo erectus appeared - the first early human whose fossils have been found in large numbers outside of Africa. The first specimen discovered, a single cranium, was unearthed in Indonesia in 1891. H. erectus was highly successful, spreading to much of Asia between 1.8 and 1.5 million years ago, and surviving as recently as 27,000 years ago.
This species, with a brain volume of around 1000 cm3, would have interacted with modern humans. They may have been the first people to take to the seas and habitually hunt prey such as mammoths and wild horses, although there is some debate about this. They may also have harnessed the use of fire and built the first shelters.
In 2004, the remains of a tiny and mysterious human species, which may have lived as recently as 13,000 years ago, were discovered on an Indonesian island. More bones of the "hobbit", or Homo floresiensis, were uncovered in 2005. Some studies suggest it had an advanced brain and was unequivocally a separate species - but others argue that these people were modern humans suffering from a genetic disorder.

First Europeans

Early human fossil evidence from Spain, dating to around 780,000 years ago, points to the first known Europeans. Stone tools have also been found in England from around 700,000 years ago, attributed to Homo antecessor or Homo heidelbergensis.
More recently, 325,000-year-old H. heidelbergensis tracks were discovered preserved on an Italian volcano. Some of the biggest collections of hominid remains ever found are from Boxgrove in England and Atapuerca in Spain. Experts believe that these humans may have had ears equipped to detect nuances of human speech, whether or not they had simple language.
Some palaeoanthropologists believe that H. heidelbergensis evolved into our own species in Africa, whilst in Europe, the Neanderthals emerged as a separate species.
The Neanderthals were found across Europe between 200,000 and 28,000 years ago. Though they still possessed pronounced brow ridges and were more thick-set, these people largely resembled us. They were as nimble-fingered as us, and matured at a similar age. Their brains were even slightly larger. It is not known whether the Neanderthals had developed simple language. But they did possess some aspects of our culture, such as ritual burial of the dead, creating art, using tools to attack each other, and complex hunting methods - as evidenced by a remarkable butchery site in the UK.
Experts disagree about whether the Neanderthals hybridised with modern humans, and about whether our arrival drove them to extinction. Plunging temperatures, free trade and poor memory may all have contributed towards their demise.

Out of Africa

There are several competing theories about how all these early humans are related to us today.
Most widely accepted is the "Out of Africa" hypothesis. This holds that ancient humans evolved exclusively in Africa, then spread across the world in two migration waves. The migration of H. erectus across Eurasia made up the first wave. Later, our own species evolved in Africa and fanned out in a second wave 200,000 years ago. These new people totally replaced H. erectus in Asia and the Neanderthals in Europe.
Advocates of the multiregional hypothesis instead believe that early humans started to leave Africa around 2 million years ago, and were never totally replaced by more recent migrants. They believe these far-flung hominids exchanged genes and interbred, slowly evolving into modern humans in many places simultaneously. Through gene flow, it is suggested, modern characteristics such as large brains gradually spread. Some fossils seem to support the multiregional hypothesis: H. erectus skulls in Asia, for example, have cheek and nasal regions similar to those of people living there today.
Most - but not all - genetic evidence appears to back the Out of Africa hypothesis. There is surprisingly little variation in the mitochondrial DNA (mtDNA) of different people today, which suggests that humans evolved recently from a small ancestral population. In addition, the variation of mtDNA in Africans is greater than elsewhere, suggesting that people have been evolving there for longer.
We may all be descended from a single African woman - dubbed Mitochondrial Eve - within the last 200,000 years. Male Y-chromosome DNA hints at a single male progenitor, too. Fewer than 50 people could have given rise to the entire population of Europe, experts believe.

Cultural revolution

The earliest anatomically modern humans are thought to have emerged around 200,000 years ago. These fossils show a rounded braincase and flatter face. Their brains had reached modern proportions of about 1350 cm3. Two skulls found in Ethiopia make up the oldest modern human remains known, at 195,000 years old.
Modern humans had made it to Asia by 90,000 years ago, Australia by 60,000 years ago, Europe and the Arctic by 40,000 years ago, and the Americas by 12,000 years ago.
Throughout history, tool use appears to have progressed slowly - once innovations were made, they lasted for millions of years with barely any alteration. But around 50,000 years ago something changed, and culture started to develop at a much more rapid rate.
Modern humans began habitually innovating new tool types, burying their dead, creating jewellery, developing sophisticated hunting techniques such as pitfall traps, using animal skins for clothing, decorating their bodies, and creating art and cave paintings. Although some of these traits appeared earlier, they seem to have been used only sporadically until this time.
These changes may have been linked to increasing brain size or to the way we thought - or could also be due to free trade and the evolution of language and communication. The dawn of human civilisation has been dated to around 30,000 years ago. The earliest known agriculture and domestication of species came as recently as 10,000 years ago. The first human cities appeared in Mesopotamia around 4,000 years ago.
Are we still evolving today? If so, how will we evolve in the future? Some argue that humans have evolved little in the last 50,000 years - but other studies suggest that thousands of genes have changed since then.
We may even be on the verge of the next step of human evolution - the human global "superorganism".

Monday, August 13, 2012

Human sex from the inside out

Source August 2009
As should be obvious, the video is sexually explicit.
Don't keep watching! NO sigas mirando!


 by Righty
Why not censor this type of thing. Small children into science might be reading. I can't believe people need to spend funds researching this kind of thing when obviously for 6 billion of us (and growing), reproduction is no problem.

holy shit!!! read all comments


Video: MRI sex
New Scientist brings you sex as you've never seen it before: the first video of a couple having sex in an MRI scanner (see video). Just released, it was made from a series of images captured during an experiment some years ago. The study aimed to prove that it was possible to image male and female genitals during sex and to help better understand human anatomy.

The simplex algorithm (Hirsch's rule)

The algorithm that runs the world
13 August 2012 by Richard Elwes
Magazine issue 2877

Its services are called upon thousands of times a second to ensure the world's business runs smoothly – but are its mathematics as dependable as we thought?
YOU MIGHT not have heard of the algorithm that runs the world. Few people have, though it can determine much that goes on in our day-to-day lives: the food we have to eat, our schedule at work, when the train will come to take us there. Somewhere, in some server basement right now, it is probably working on some aspect of your life tomorrow, next week, in a year's time.
Perhaps ignorance of the algorithm's workings is bliss. The door to Plato's Academy in ancient Athens is said to have borne the legend "let no one ignorant of geometry enter". That was easy enough to say back then, when geometry was firmly grounded in the three dimensions of space our brains were built to cope with. But the algorithm operates in altogether higher planes. Four, five, thousands or even many millions of dimensions: these are the unimaginable spaces the algorithm's series of mathematical instructions was devised to probe.
Perhaps, though, we should try a little harder to get our heads round it. Because powerful though it undoubtedly is, the algorithm is running into a spot of bother. Its mathematical underpinnings, though not yet structurally unsound, are beginning to crumble at the edges. With so much resting on it, the algorithm may not be quite as dependable as it once seemed.
To understand what all this is about, we must first delve into the deep and surprising ways in which the abstractions of geometry describe the world around us. Ideas about such connections stretch at least as far back as Plato, who picked out five 3D geometric shapes, or polyhedra, whose perfect regularity he thought represented the essence of the cosmos. The tetrahedron, cube, octahedron and 20-sided icosahedron embodied the "elements" of fire, earth, air and water, and the 12-faced dodecahedron the shape of the universe itself.
Things have moved on a little since then. Theories of physics today regularly invoke strangely warped geometries unknown to Plato, or propose the existence of spatial dimensions beyond the immediately obvious three. Mathematicians, too, have reached for ever higher dimensions, extending ideas about polyhedra to mind-blowing "polytopes" with four, five or any number of dimensions.
A case in point is a law of polyhedra proposed in 1957 by the US mathematician Warren Hirsch. It stated that the maximum number of edges you have to traverse to get between two corners on any polyhedron is never greater than the number of its faces minus the number of dimensions in the problem, in this case three. The two opposite corners on a six-sided cube, for example, are separated by exactly three edges, and no pair of corners is four or more apart.
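
The cube arithmetic is easy to check by hand, and the same check extends to hypercubes: an n-dimensional cube has 2n facets, so Hirsch's rule predicts that no two corners are more than 2n - n = n edges apart. A minimal Python sketch, walking the hypercube's corner graph by breadth-first search, confirms this for a few small values of n:

```python
from collections import deque

def hypercube_diameter(n):
    """Largest number of edges separating two corners of the n-cube,
    found by breadth-first search from one corner (the cube looks the
    same from every corner, so one starting point is enough)."""
    start = (0,) * n
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for i in range(n):                          # flip one coordinate = cross one edge
            w = v[:i] + (1 - v[i],) + v[i + 1:]
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

for n in (3, 4, 5):
    hirsch_bound = 2 * n - n                        # facets minus dimensions
    print(n, hypercube_diameter(n), hirsch_bound)   # the diameter meets the bound exactly
```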
Hirsch's rule holds true for all 3D polyhedra. But it has never been proved generally for shapes in higher dimensions. The expectation that it should translate has come largely through analogy with other geometrical rules that have proved similarly elastic (see "Edges, corners and faces"). When it comes to guaranteeing short routes between points on the surface of a 4D, 5D or 1000D shape, Hirsch's rule has remained one of those niggling unsolved problems of mathematics - a mere conjecture.
How is this relevant? Because, for today's mathematicians, dimensions are not just about space. True, the concept arose because we have three coordinates of location that can vary independently: up-down, left-right and forwards-backwards. Throw in time, and you have a fourth "dimension" that works very similarly, apart from the inexplicable fact that we can move through it in only one direction.
But beyond motion, we often encounter real-world situations where we can vary many more than four things independently. Suppose, for instance, you are making a sandwich for lunch. Your fridge contains 10 ingredients that can be used in varying quantities: cheese, chutney, tuna, tomatoes, eggs, butter, mustard, mayonnaise, lettuce, hummus. These ingredients are nothing other than the dimensions of a sandwich-making problem. This can be treated geometrically: combine your choice of ingredients in any particular way, and your completed snack is represented by a single point in a 10-dimensional space.

Brutish problems

In this multidimensional space, we are unlikely to have unlimited freedom of movement. There might be only two mouldering hunks of cheese in the fridge, for instance, or the merest of scrapings at the bottom of the mayonnaise jar. Our personal preferences might supply other, more subtle constraints to our sandwich-making problem: an eye on the calories, perhaps, or a desire not to mix tuna and hummus. Each of these constraints represents a boundary to our multidimensional space beyond which we cannot move. Our resources and preferences in effect construct a multidimensional polytope through which we must navigate towards our perfect sandwich.
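
Concretely, one candidate sandwich is just a vector of ten ingredient amounts, and each constraint is an inequality that vector must satisfy. The sketch below (every quantity invented for illustration; the tuna-and-hummus preference is left out, since it is not a simple linear limit) checks whether one particular sandwich lies inside the feasible polytope:

```python
import numpy as np

ingredients = ["cheese", "chutney", "tuna", "tomatoes", "eggs",
               "butter", "mustard", "mayonnaise", "lettuce", "hummus"]

# One candidate sandwich = one point in 10-dimensional ingredient space
# (amounts in grams; all numbers here are made up).
sandwich = np.array([40, 10, 0, 30, 0, 5, 3, 15, 20, 0], dtype=float)

# Constraints carve out the feasible polytope.
in_the_fridge = np.array([60, 50, 100, 80, 120, 20, 15, 25, 60, 80])
calories_per_gram = np.array([4.0, 2.0, 1.3, 0.2, 1.5, 7.2, 0.7, 6.8, 0.15, 1.7])
calorie_cap = 600.0

inside_polytope = (
    (sandwich >= 0).all()                        # can't use negative amounts
    and (sandwich <= in_the_fridge).all()        # can't use more than we have
    and calories_per_gram @ sandwich <= calorie_cap
)
print("inside the feasible polytope?", inside_polytope)
```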
In reality, the decision-making processes in our sandwich-making are liable to be a little haphazard; with just a few variables to consider, and mere gastric satisfaction riding on the outcome, that's not such a problem. But in business, government and science, similar optimisation problems crop up everywhere and quickly morph into brutes with many thousands or even millions of variables and constraints. A fruit importer might have a 1000-dimensional problem to deal with, for instance, shipping bananas from five distribution centres storing varying numbers of fruit to 200 shops with different numbers in demand. How many items of fruit should be sent from which centres to which shops while minimising total transport costs?
A fund manager might similarly want to arrange a portfolio optimally to balance risk and expected return over a range of stocks; a railway timetabler to decide how best to roster staff and trains; or a factory or hospital manager to work out how to juggle finite machine resources or ward space. Each such problem can be depicted as a geometrical shape whose number of dimensions is the number of variables in the problem, and whose boundaries are delineated by whatever constraints there are (see diagram). In each case, we need to box our way through this polytope towards its optimal point.
This is the job of the algorithm.
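
To make that concrete, here is what handing a scaled-down version of the fruit-importer's problem to an off-the-shelf linear-programming routine might look like. SciPy's linprog is used purely as a stand-in for the commercial packages described below, and every supply, demand and cost figure is invented:

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([30, 25])             # bananas held at each of 2 centres
demand = np.array([20, 15, 20])         # bananas wanted by each of 3 shops
cost = np.array([[4.0, 6.0, 9.0],       # transport cost per banana,
                 [5.0, 3.0, 2.0]])      # centre i -> shop j

n_centres, n_shops = cost.shape
c = cost.ravel()                        # one variable per (centre, shop) pair

A_ub, b_ub = [], []
for i in range(n_centres):              # ship no more than each centre holds
    row = np.zeros(n_centres * n_shops)
    row[i * n_shops:(i + 1) * n_shops] = 1
    A_ub.append(row); b_ub.append(supply[i])
for j in range(n_shops):                # meet each shop's demand
    row = np.zeros(n_centres * n_shops)
    row[j::n_shops] = -1
    A_ub.append(row); b_ub.append(-demand[j])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
print(res.x.reshape(n_centres, n_shops))   # optimal shipping plan
print("total transport cost:", res.fun)
```

The real importer's version is the same construction with a thousand variables instead of six, and under the hood a routine like this is calling exactly the kind of algorithm described next.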
Its full name is the simplex algorithm, and it emerged in the late 1940s from the work of the US mathematician George Dantzig, who had spent the second world war investigating ways to increase the logistical efficiency of the US air force. Dantzig was a pioneer in the field of what he called linear programming, which uses the mathematics of multidimensional polytopes to solve optimisation problems. One of the first insights he arrived at was that the optimum value of the "target function" - the thing we want to maximise or minimise, be that profit, travelling time or whatever - is guaranteed to lie at one of the corners of the polytope. This instantly makes things much more tractable: there are infinitely many points within any polytope, but only ever a finite number of corners.
If we have just a few dimensions and constraints to play with, this fact is all we need. We can feel our way along the edges of the polytope, testing the value of the target function at every corner until we find its sweet spot. But things rapidly escalate. Even just a 10-dimensional problem with 50 constraints - perhaps trying to assign a schedule of work to 10 people with different expertise and time constraints - may already land us with several billion corners to try out.
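
In two dimensions the brute-force approach is easy to spell out: every corner of the feasible region sits where two constraint lines cross, so we can solve each pair, keep the feasible ones, and read off the best. The sketch below does exactly that for a tiny invented problem, then counts how many candidate corners the same approach would face with 50 constraints in 10 dimensions:

```python
from itertools import combinations
from math import comb
import numpy as np

# Tiny invented problem: maximise 3x + 2y subject to
# x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

best = None
for i, j in combinations(range(len(A)), 2):       # each pair of constraint lines
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                                  # parallel lines meet at no corner
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):                 # keep only feasible corners
        if best is None or c @ v > c @ best:
            best = v
print("best corner:", best, "value:", c @ best)

# The same recipe in 10 dimensions with 50 constraints means choosing which
# 10 constraints are tight at each candidate corner:
print(comb(50, 10), "candidate corners")          # 10,272,278,170 - about 10 billion
```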
The simplex algorithm finds a quicker way through. Rather than randomly wandering along a polytope's edges, it implements a "pivot rule" at each corner. Subtly different variations of this pivot rule exist in different implementations of the algorithm, but often it involves picking the edge along which the target function descends most steeply, thus ensuring each step takes us nearer the optimal value. When a corner is found where no further descent is possible, we know we have arrived at the optimal point.
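
In code, the whole procedure fits in a few dozen lines. What follows is a bare-bones sketch of the tableau form of the simplex method with Dantzig's original pivot rule (enter the variable with the most negative reduced cost, leave by the ratio test); a production solver would add safeguards against degeneracy, cycling and rounding error that are omitted here:

```python
import numpy as np

def simplex(c, A, b):
    """Maximise c @ x subject to A @ x <= b and x >= 0, assuming b >= 0
    so that the slack variables give a ready-made starting corner."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))       # the tableau
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)             # slack variables
    T[:m, -1] = b
    T[-1, :n] = -c                         # bottom row holds the reduced costs
    basis = list(range(n, n + m))

    while True:
        col = int(np.argmin(T[-1, :-1]))   # Dantzig's rule: steepest improvement
        if T[-1, col] >= -1e-9:
            break                          # no improving edge left: optimal corner
        ratios = np.full(m, np.inf)
        pos = T[:m, col] > 1e-9
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))       # ratio test: how far we can slide
        if ratios[row] == np.inf:
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]              # pivot: move to the neighbouring corner
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col

    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# The toy problem from the sketch above: the walk ends at the corner (4, 0), value 12.
x, value = simplex(np.array([3.0, 2.0]),
                   np.array([[1.0, 1.0], [1.0, 3.0]]),
                   np.array([4.0, 6.0]))
print(x, value)
```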
Practical experience shows that the simplex method is generally a very slick problem-solver indeed, typically reaching an optimum solution after a number of pivots comparable to the number of dimensions in the problem. That means a likely maximum of a few hundred steps to solve a 50-dimensional problem, rather than billions with a suck-it-and-see approach. Such a running time is said to be "polynomial" or simply "P", the benchmark for practical algorithms that have to run on finite processors in the real world.
Dantzig's algorithm saw its first commercial application in 1952, when Abraham Charnes and William Cooper at what is now Carnegie Mellon University in Pittsburgh, Pennsylvania, teamed up with Robert Mellon at the Gulf Oil Company to discover how best to blend available stocks of four different petroleum products into an aviation fuel with an optimal octane level.
Since then the simplex algorithm has steadily conquered the world, embedded both in commercial optimisation packages and bespoke software products. Wherever anyone is trying to solve a large-scale optimisation problem, the chances are that some computer chip is humming away to its tune. "Probably tens or hundreds of thousands of calls of the simplex method are made every minute," says Jacek Gondzio, an optimisation specialist at the University of Edinburgh, UK.
Yet even as its popularity grew in the 1950s and 1960s, the algorithm's underpinnings were beginning to show signs of strain. To start with, its running time is polynomial only on average. In 1972, US mathematicians Victor Klee and George Minty reinforced this point by running the algorithm around some ingeniously deformed multidimensional "hypercubes". Just as a square has four corners, and a cube eight, a hypercube in n dimensions has 2^n corners. The wonky way Klee and Minty put their hypercubes together meant that the simplex algorithm had to run through all of these corners before landing on the optimal one. In just 41 dimensions, that leaves the algorithm with over a trillion edges to traverse.
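
Klee and Minty's construction can be written down explicitly. The sketch below uses one standard textbook formulation of their squashed cube (an assumption: the details differ slightly from the original 1972 paper, so treat it as illustrative); started from the origin, a simplex solver using Dantzig's pivot rule can be made to visit every one of its 2^n corners:

```python
import numpy as np

def klee_minty(n):
    """One textbook form of the Klee-Minty cube in n dimensions:
    maximise   sum_j 2**(n-1-j) * x[j]
    subject to sum_{j<i} 2**(i-j+1) * x[j] + x[i] <= 5**(i+1),  x >= 0."""
    c = np.array([2.0 ** (n - 1 - j) for j in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)
        A[i, i] = 1.0
    b = np.array([5.0 ** (i + 1) for i in range(n)])
    return c, A, b

c, A, b = klee_minty(3)
print(A)          # [[1 0 0], [4 1 0], [8 4 1]] - the "wonky" cube's constraints
print(b)          # [5, 25, 125]

print(2 ** 41)    # 2,199,023,255,552 corners in 41 dimensions
```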
A similar story holds for every variation of the algorithm's pivot rule tried since Dantzig's original design: however well it does in general, it always seems possible to concoct some awkward optimisation problems in which it performs poorly. The good news is that these pathological cases tend not to show up in practical applications - though exactly why this should be so remains unclear. "This behaviour eludes any rigorous mathematical explanation, but it certainly pleases practitioners," says Gondzio.

Flashy pretenders

The fault was still enough to spur on researchers to find an alternative to the simplex method. The principal pretender came along in the 1970s and 1980s with the discovery of "interior point methods", flashy algorithms which rather than feeling their way around a polytope's surface drill a path through its core. They came with a genuine mathematical seal of approval - a guarantee always to run in polynomial time - and typically took fewer steps to reach the optimum point than the simplex method, rarely needing over 100 moves regardless of how many dimensions the problem had.
The discovery generated a lot of excitement, and for a while it seemed that the demise of Dantzig's algorithm was on the cards. Yet it survived and even prospered. The trouble with interior point methods is that each step entails far more computation than a simplex pivot: instead of comparing a target function along a small number of edges, you must analyse all the possible directions within the polytope's interior, a gigantic undertaking. For some huge industrial problems, this trade-off is worth it, but for by no means all. Gondzio estimates that between 80 and 90 per cent of today's linear optimisation problems are still solved by some variant of the simplex algorithm. The same goes for a good few of the even more complex non-linear problems (see "Straight down the line"). "As a devoted interior-point researcher I have a huge respect for the simplex method," says Gondzio. "I'm doing my best trying to compete."
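
The trade-off is easy to try for yourself with an off-the-shelf solver. Assuming a SciPy release that exposes the HiGHS backends ("highs-ds" for dual simplex, "highs-ipm" for interior point), linprog can be pointed at the same problem with each family in turn; every number in this sketch is invented purely for the comparison:

```python
import numpy as np
from scipy.optimize import linprog

# A random dense LP, used only to exercise both solver families.
rng = np.random.default_rng(0)
m, n = 300, 600
A_ub = rng.uniform(0.1, 1.0, size=(m, n))
b_ub = A_ub @ np.full(n, 0.5)          # keeps the problem feasible and bounded
c = -np.ones(n)                        # maximise the sum of the variables

for method in ("highs-ds", "highs-ipm"):   # dual simplex vs interior point
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method=method)
    print(method, "optimal value:", round(-res.fun, 3))
```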
We would still dearly love to find something better: some new variant of the simplex algorithm that preserves all its advantages, but also invariably runs in polynomial time. For US mathematician and Fields medallist Steve Smale, writing in 1998, discovering such a "strongly polynomial" algorithm was one of 18 outstanding mathematical questions to be dealt with in the 21st century.
Yet finding such an algorithm may not now even be possible.
That is because the existence of such an improved, infallible algorithm depends on a more fundamental geometrical assumption - that a short enough path around the surface of a polytope between two corners actually exists. Yes, you've got it: the Hirsch conjecture.
The fates of the conjecture and the algorithm have always been intertwined. Hirsch was himself a pioneer in operational research and an early collaborator of Dantzig's, and it was in a letter to Dantzig in 1957 musing about the efficiency of the algorithm that Hirsch first formulated his conjecture.
Until recently, little had happened to cast doubt on it. Klee proved it true for all 3D polyhedra in 1966, but had a hunch the same did not hold for higher-dimensional polytopes. In his later years, he made a habit of suggesting it as a problem to every freshly scrubbed researcher he ran across. In 2001 one of them, a young Spaniard called Francisco Santos, now at the University of Cantabria in Santander, took on the challenge.
As is the way of such puzzles, it took time. After almost a decade working on the problem, Santos was ready to announce his findings at a conference in Seattle in 2010. Last month, the resulting paper was published in the Annals of Mathematics (vol 176, p 383). In it, Santos describes a 43-dimensional polytope with 86 faces. According to Hirsch's conjecture, the longest path across this shape would have (86-43) steps, that is, 43 steps. But Santos was able to establish conclusively that it contains a pair of corners at least 44 steps apart.
If only for a single special case, Hirsch's conjecture had been proved false. "It settled a problem that we did not know how to approach for many decades," says Gil Kalai of the Hebrew University of Jerusalem. "The entire proof is deep, complicated and very elegant. It is a great result."
A great result, true, but decidedly bad news for the simplex algorithm. Since Santos's first disproof, further Hirsch-defying polytopes have been found in dimensions as low as 20. The only known limit on the shortest distance between two points on a polytope's surface is now contained in a mathematical expression derived by Kalai and Daniel Kleitman of the Massachusetts Institute of Technology in 1992. This bound is much larger than the one the Hirsch conjecture would have provided, had it proved to be true. It is far too big, in fact, to guarantee a reasonable running time for the simplex method, whatever fancy new pivot rule we might dream up. If this is the best we can do, it may be that Smale's goal of an idealised algorithm will remain forever out of reach - with potentially serious consequences for the future of optimisation.
All is not lost, however. A highly efficient variant of the simplex algorithm may still be possible if the so-called polynomial Hirsch conjecture is true. This would considerably tighten Kalai and Kleitman's bound, guaranteeing that no polytopes have paths disproportionately long compared with their dimension and number of faces. A topic of interest before the plain-vanilla Hirsch conjecture melted away, the polynomial version has been attracting intense attention since Santos's announcement, both as a deep geometrical conundrum and a promising place to sniff around for an optimally efficient optimisation procedure.
As yet, there is no conclusive sign that the polynomial conjecture can be proved either. "I am not confident at all," says Kalai. Not that this puts him off. "What is exciting about this problem is that we do not know the answer."
A lot could be riding on that answer. As the algorithm continues to hum away in those basements it is still, for the most part, telling us what we want to know in the time we want to know it. But its own fate is now more than ever in the hands of the mathematicians.

Edges, corners and faces

Since Plato laid down his stylus, a lot of work has gone into understanding the properties of 3D shapes, or polyhedra. Perhaps the most celebrated result came from the 18th-century mathematician Leonhard Euler. He noted that every polyhedron has a number of edges that is two fewer than the total of its faces and corners. The cube, for example, has six faces and eight corners, a total of 14, while its edges number 12. The truncated icosahedron, meanwhile, is the familiar pattern of a standard soccer ball. It has 32 faces (12 pentagonal and 20 hexagonal), 60 corners - and 90 edges.
The French mathematician Adrien-Marie Legendre proved this rule in 1794 for every 3D shape that contains no holes and does not cut through itself in any strange way. As geometry started to grow more sophisticated and extend into higher dimensions in the 19th century, it became clear that Euler's relationship didn't stop there: a simple extension to the rule applies to shapes, or polytopes, in any number of dimensions. For a 4D "hypercube", for example, a variant of the formula guarantees that the total number of corners (16) and faces (24) will be equal to number of edges (32) added to the number of 3D "facets" the shape possesses (8).
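
Written out, the relation and its four-dimensional extension look like this, with V, E, F and C counting corners, edges, faces and 3D cells respectively:

```latex
% Euler's relation for 3D polyhedra, checked against the two examples above
\[ V - E + F = 2 : \qquad 8 - 12 + 6 = 2 \ \text{(cube)}, \qquad
   60 - 90 + 32 = 2 \ \text{(truncated icosahedron)} \]

% The 4D extension, with C counting the three-dimensional facets ("cells")
\[ V - E + F - C = 0 : \qquad 16 - 32 + 24 - 8 = 0
   \quad\Longleftrightarrow\quad 16 + 24 = 32 + 8 \ \text{(4D hypercube)} \]
```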
The rule derived by Warren Hirsch in 1957 about the maximum distance between two corners of a polyhedron was thought to be similarly cast-iron. Whether it truly is turns out to have surprising relevance to the smooth workings of the modern world (see main story).

2000 years of algorithms

George Dantzig's simplex algorithm has a claim to be the world's most significant (see main story). But algorithms go back far further.
c. 300 BC THE EUCLIDEAN ALGORITHM
From Euclid's mathematical primer Elements, this is the granddaddy of all algorithms, showing how, given two numbers, you can find the largest number that divides into both. It has still not been bettered (see the sketch after this timeline).
820 THE QUADRATIC ALGORITHM
The word algorithm is derived from the name of the Persian mathematician Al-Khwarizmi. Experienced practitioners today perform his algorithm for solving quadratic equations (those containing an x^2 term) in their heads. For everyone else, modern algebra provides the formula familiar from school.
1936 THE UNIVERSAL TURING MACHINE
The British mathematician Alan Turing equated algorithms with mechanical processes - and found one to mimic all the others, the theoretical template for the programmable computer.
1946 THE MONTE CARLO METHOD
When your problem is just too hard to solve directly, enter the casino of chance. John von Neumann, Stanislaw Ulam and Nicholas Metropolis's Monte Carlo algorithm taught us how to play and win.
1957 THE FORTRAN COMPILER
Programming was a fiddly, laborious job until an IBM team led by John Backus invented the first high-level programming language, Fortran. At the centre is the compiler: the algorithm which converts the programmer's instructions into machine code.
1962 QUICKSORT        
Extracting a word from the right place in a dictionary is an easy task; putting all the words in the right order in the first place is not. The British mathematician Tony Hoare provided the recipe, now an essential tool in managing databases of all kinds.
1965 THE FAST FOURIER TRANSFORM
Much digital technology depends on breaking down irregular signals into their pure sine-wave components - making James Cooley and John Tukey's algorithm one of the world's most widely used.
1994 SHOR'S ALGORITHM        
Bell Labs's Peter Shor found a new, fast algorithm for splitting a whole number into its constituent primes - but it could only be performed by a quantum computer. If ever implemented on a large scale, it would nullify almost all modern internet security.
1998 PAGERANK        
The internet's vast repository of information would be of little use without a way to search it. Stanford University's Sergey Brin and Larry Page found a way to assign a rank to every web page - and the founders of Google have been living off it ever since.
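
As promised above, here is the Euclidean algorithm from the top of the timeline as a short Python sketch; the modern remainder-based form shown here is equivalent to Euclid's repeated-subtraction description:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b);
    when the remainder hits zero, the other number divides both originals."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))   # 21: the largest number dividing both 1071 and 462
```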

Straight down the line

When a young and nervous George Dantzig spoke about his new simplex algorithm at a conference of eminent economists and statisticians in Wisconsin in 1948, a rather large hand was raised in objection at the back of the room. It was that of the renowned mathematician Harold Hotelling. "But we all know the world is non-linear," he said.
It was a devastating put-down. The simplex algorithm's success in solving optimisation problems (see main story) depends on assuming that variables vary in response to other variables along nice straight lines. A cutlery company increasing its expenditure on metal, for example, will produce proportionately more finished knives, forks and profit the next month.
In fact, as Hotelling pointed out, the real world is jam-packed with non-linearity. As the cutlery company expands, economies of scale may mean the marginal cost of each knife or fork drops, making for a non-linear profit boost. In geometrical terms, such problems are represented by multidimensional shapes just as linear problems are, but ones bounded by curved faces that the simplex algorithm should have difficulty crawling round.
Surprisingly, though, linear approximations to non-linear processes turn out to be good enough for most practical purposes. "I would guess that 90 or 95 per cent of all optimisation problems solved in the world are linear programs," says Jacek Gondzio of the University of Edinburgh, UK. For those few remaining problems that do not submit to linear wiles, there is a related field of non-linear programming - and here too, specially adapted versions of the simplex algorithm have come to play an important part.
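
A toy calculation shows why the linear stand-in is often good enough. Take an invented cutlery firm whose output grows slightly faster than its metal spend (mild economies of scale, so the true profit curve is non-linear), and compare that curve with the straight line a linear program would use around the current level of spending; close to that level the two barely differ:

```python
def profit(metal_spend):
    """Invented non-linear profit: output rises a touch faster than spend."""
    units_made = 40.0 * metal_spend ** 1.1
    return 2.5 * units_made - metal_spend

base = 100.0                                    # current monthly spend on metal
eps = 1e-3
slope = (profit(base + eps) - profit(base - eps)) / (2 * eps)   # local gradient

for spend in (80.0, 100.0, 120.0):
    linear = profit(base) + slope * (spend - base)   # the LP's straight-line model
    print(spend, round(profit(spend), 1), round(linear, 1))
```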
Richard Elwes is a visiting fellow at the University of Leeds, UK, and the author of Maths 1001: Absolutely everything that matters in mathematics (Quercus, 2010)