Who Invented Zero And Why?
The concept of zero is something that we take for granted. We have various other names for it, such as nought and nil, and even slang terms like zip, zilch and nada. It is also sometimes, confusingly, referred to as ‘oh’ when saying telephone numbers. In tennis we say ‘love’ instead of zero and in cricket ‘duck’.
It’s hard to imagine maths, language or life without zero, yet it hasn’t always been around. Who invented zero, and why?
Zero’s origins
It’s likely that the origins of zero go back as far as ancient Mesopotamia, where Sumerian scribes used spaces to show absences in number columns around four thousand years ago.
However, the first record we have of a symbol resembling zero comes from Babylon in the third century BC. The Babylonians used a number system based on sixty (sexagesimal), rather than the decimal system we use today, and they wrote a symbol consisting of two small wedges to mark an empty position, much as we use zero to distinguish between tens, hundreds and thousands. Its function was purely as a placeholder, though: it had no value of its own.
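To see why the placeholder mattered, here is the idea in modern positional notation (the bracketed digit lists are a modern convention, not Babylonian cuneiform):

```latex
\[
(1,\,1)_{60} = 1 \times 60 + 1 = 61,
\qquad
(1,\,0,\,1)_{60} = 1 \times 60^{2} + 0 \times 60 + 1 = 3601
\]
% Without a marker for the empty middle position, both numbers
% would be written as the same pair of marks.
```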
The Maya independently developed something similar by around 350 AD, using a zero marker in their calendars.
Surprisingly, the best-known mathematicians of Ancient Greece neither had a name for zero nor used a placeholder like the Babylonians did.
Zero as a symbol and a value
The first record we have of zero being understood both as a symbol and as a value in its own right comes from India. Around 650 AD the mathematician Brahmagupta, among others, used small dots under numbers to represent zero.
The dots were known as ‘sunya’, meaning empty, and also as ‘kha’, meaning place. Their version of zero was therefore understood both as a null value and as a placeholder.
Brahmagupta was also the first to set down rules for how zero behaves in arithmetic: how it could be reached through addition and subtraction, and what results from adding, subtracting or multiplying by it. He did, however, get it wrong when it came to dividing by zero!
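Paraphrased in modern notation (a modern gloss, not Brahmagupta’s own symbols), his rules look like this; the division rule is the one that doesn’t hold up:

```latex
\[
a + 0 = a, \qquad a - 0 = a, \qquad a \times 0 = 0
\]
% Brahmagupta also claimed that
\[
\frac{0}{0} = 0
\]
% whereas in modern mathematics division by zero is left undefined.
```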
We know about this largely because of the Bakhshali manuscript, an ancient Indian mathematical text written in Sanskrit. It was discovered by a local farmer in 1881 and was originally thought to date from the ninth century, but carbon dating in 2017 showed the oldest pages to be from sometime between 224 AD and 383 AD.
Zero then made its way from India to China and back to the Middle East. By about 773 AD it had reached Baghdad. There the mathematician Mohammed ibn-Musa al-Khowarizmi became the first to work on equations that equalled zero (the study now known as algebra), and he called zero ‘sifr’. By the ninth century zero was part of the Arabic numeral system, in a shape similar to the oval we use today.
Zero makes its way to Europe
Zero made its way to Europe when the Moors conquered Spain, and translations of al-Khowarizmi’s work reached England around the twelfth century.
The Italian mathematician Fibonacci played a large part in introducing zero to the mainstream in the 1200s, and it was adopted by Italian merchants and German bankers for accounting purposes.
The use of zero was outlawed for a while, as governments were suspicious of Arabic numerals and the ease with which one symbol could be altered into another. It continued to be used in encrypted messages, however, which is why the word cipher (meaning code) derives from the Arabic ‘sifr’.
The modern zero
René Descartes, Isaac Newton and Gottfried Leibniz (the latter two independently developing calculus) continued to develop zero’s place in mathematics, and it became the familiar concept it is today.
Without zero, civilisation wouldn’t have progressed anywhere near as far as it has. Calculus was born from working with quantities as they approach zero, and without it we wouldn’t have modern physics, engineering, computing or, indeed, a large part of finance and economics.
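The core idea can be written in one line of modern notation: the derivative measures change by letting a small quantity h shrink towards zero.

```latex
\[
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\]
% The limit works precisely because h approaches zero without
% ever equalling it, so division by zero itself never occurs.
```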