Geographical oddities

In his book Asking for trouble (on which Richard Attenborough’s film Cry freedom was later based) the dissident white South African journalist Donald Woods describes how he and his family escaped from his country’s dreaded secret police – who seemed increasingly determined to kill them – by fleeing across the border into the small hilly state of Lesotho (pronounced by South Africans as ‘Le-SOO-too’). The odd thing about this escape is that the family were not safe even then – for Lesotho is an enclave, and apart from the micro-states of San Marino and Vatican City (both in Italy) it is the only country in the world that is entirely surrounded by another one. Unfortunately for the Woods family, the other country is South Africa.

So getting away was clearly going to be a problem. Although the family’s home in the city of East London on the south-east coast was under permanent police surveillance, they had managed by various subterfuges to leave it without arousing suspicion; but their cover would not hold for long. The nearest safe territory was Botswana, whose capital Gaborone is just north of the South African border; and the distance between Gaborone and Lesotho’s capital Maseru (just east of the South African border) is almost 700 km. Since Lesotho is landlocked, leaving by sea was not an option; and once the escape was discovered (as radio reports soon told them it had been) the overland route, already very risky, could not be used either. So that meant flying out.

They would be in South African airspace (and hence at risk of being forced down by hostile air force jets) for almost the whole flight – but that was surely safer than spending far longer on South African soil (as a car trip, complete with strict border checks, would have required). All aircraft leaving Lesotho for other countries were required to land at South Africa’s Bloemfontein airport, 150 km west of Maseru, before continuing their journey – clearly not an option either. In the event, when challenged by the Bloemfontein control tower, the pilot of the chartered light aircraft simply replied that he was carrying ‘seven holders of United Nations passports’, without mentioning their names (the passports had been issued to them by the Lesotho government, which was prepared to risk its relations with South Africa at least to that extent). Looking back, it’s hard to believe the relatively slow plane was not forced down en route, especially after ignoring the rule about landing at Bloemfontein; but the fugitives made it safely to Botswana (perhaps it was the UN passports that did the trick, by making South Africa wary of provoking an international incident), and Woods went on to make a name for himself with his condemnatory books and reports on the apartheid regime.

Again looking back, it’s hard to see why Lesotho – the former British colony of Basutoland, which had always been a separate entity – had not been absorbed into South Africa long before, like other African kingdoms in the region such as Zululand. Another such kingdom, Swaziland, has survived intact further north-east; but it is not an enclave like Lesotho, for it also borders on the former Portuguese colony of Mozambique.

Whatever the reasons for Lesotho’s curious survival as an independent state, its geographical oddity helped save Donald Woods and his family from persecution by giving them some badly needed breathing space.

Another such oddity is the border between Thailand and Myanmar (a.k.a. Burma) on the skinny Malay Peninsula, which at its narrowest point is not much more than 50 km wide. For much of the way it is divided between the two countries, which each have a strip of varying width. Myanmar generally gets the best of the deal, and at one point the distance between the border and the Thai coast to the east shrinks to barely 10 km; but then Thailand expands across the full width of the peninsula, which itself widens in the direction of Malaysia, leaving Myanmar to tail off about 150 km north of the famous Thai coastal resort of Phuket.

It isn’t as if Myanmar’s presence on the peninsula gives it direct land access to Malaysia – there’s a gap of over 300 km, with Thailand holding all the space in between. And, in view of the long-standing enmity between the two countries, I can only assume they both have to spend a great deal of money on keeping this border secure.

You can’t help wondering why Myanmar never overran the narrow strip of Thai territory, cutting off the southern part of it from the main body of the country, and perhaps even occupying it entirely; Phuket would then have been in Myanmar rather than Thailand and, given the former military dictatorship’s hostility to foreign visitors and all things Western, would never have become a prosperous resort in the first place. You also can’t help wondering why Myanmar has bothered to hold on to its own relatively narrow strip of the peninsula at all – it does not seem to be a particular source of oil or gemstones, which are two of the impoverished country’s main exports. But perhaps this has to do with countries’ apparently instinctive reluctance to give up so much as an inch of their territory, whatever benefits that might bring them.

But now Central Asia. Take a look at the maps of four of the world’s seven ‘stans’: Uzbekistan, Tajikistan, Kyrgyzstan and Afghanistan, each of which borders on two or even all three of the rest. The first three countries meet in a curious snail-like whorl, with narrow, jagged bits of territory protruding into those of their neighbours; at some points the protrusions are no more than 30 km wide, and Uzbekistan even has four tiny enclaves inside the territory of neighbouring Kyrgyzstan (which luckily also speaks a Turkic language, whereas the languages spoken in the other two countries are varieties of the completely unrelated Persian). I can’t begin to imagine how this mess is policed and administered; but there seem to have been no attempts to tidy it up.

As for Afghanistan, it has a fairly regular-looking border – apart from a curious strip of land known as the Wakhan Corridor, which at its narrowest is barely 10 km wide and interrupts what would otherwise seem to be the natural border between Tajikistan and Pakistan (as well as Kashmir, which Pakistan and India have fought over for decades). Instead, like a rude tongue stuck out at all its neighbours, the 200-km corridor gives Afghanistan a short section of border with, of all places, China. Only 12,000 people live in it, but Wakhan survives regardless, and policing it must again be a very costly and time-consuming business for all four of the countries concerned. The mediaeval Italian explorer Marco Polo is believed to have travelled through the corridor on his way east, and the Afghan government has repeatedly asked China to open the border at the far end; but, fearing (perhaps rightly) the sociopolitical consequences for its western regions with their large numbers of Muslim rebels, China has just as repeatedly refused. Conundrum after conundrum, and no solution seems to be in sight.

But geographical oddity can have its upsides – in wine, for instance, and that’s a subject dear to my heart (though perhaps not my liver – but I’ve never heard a song called ‘Listen to your liver’). From the 1860s onwards an insect known by the originally Greek name phylloxera (‘dry-leaf’) came close to destroying Europe’s ancient vineyards. There is still no cure, and the only answer has been to graft vines onto North American ‘rootstock’, which has so far proved immune to the little creature’s depredations. Some types of grape were believed to have been lost forever. One was Carménère.

But DNA research recently allowed an unexpected discovery. Some of the vines grown in Chile under the name Merlot turned out not to have been Merlot at all, but the descendants of a batch of Carménère that had been exported from France before phylloxera struck; and Chile is still almost the only part of the world where the insect has not yet made an appearance.

How come? The consensus is that Chile’s geographical oddity has protected its vines from infection. It is a very narrow and very long country (on average just 175 km wide, but 4,250 km long), and is protected on all sides by formidable natural barriers: the Atacama desert (the world’s driest region) to the north, the Andes (the world’s second-highest mountains, with permanent snow on their peaks) to the east, the Pacific (the world’s vastest ocean) to the west, and the Antarctic (the world’s coldest region) to the south. All this seems to have kept phylloxera out – although who knows what the long-term effects of climate change and mutation will be?

In any case, Carménère has now become ‘the Chilean wine’, for it was long believed extinct and now survives only there, without the help of grafting. It is not a particularly tasty wine, but that may be because it had never been cultivated for its specific qualities – on the assumption that it was just a kind of Merlot.

Far more of our world depends on geographical oddities than we might think. Nature, not nurture.


Middle East?

For most of the 20th century and on into the 21st, the words ‘Middle East’ have been synonymous with seemingly intractable conflict: the defeat of Turkey’s Ottoman empire in the First World War and the resulting loss of its colonial possessions in the Levant and North Africa; the duplicitous manoeuvrings of the colonial powers Britain and France which, after promising Arabs their freedom if they would rise in revolt against the Turks, proceeded instead to divide up the spoils between them (which is how Iraq, Kuwait, Jordan and Palestine ended up as British colonies – at best, client states – and Syria and Lebanon as French ones); the misconceived decision by the British government in 1917 to allow mass Zionist immigration into Palestine (the biblical ‘holy land’) with the express intention of creating a homeland for the dispersed Jews of the world, which sowed the seeds of fierce enmity between the dispossessed Arab population and the ever more numerous newcomers, who eventually founded their own state on land that had once belonged to other people (and that the British were surely not entitled to give away) – the Zionists variously claimed that this was a ‘land without people for a people without land’ (while knowing perfectly well that it was inhabited) or that they had a right to it because Jews had lived there thousands of years before, making the present inhabitants temporary intruders (but by that logic the whole map of the world could be redrawn); bitter warfare between the Muslim and Christian populations of Lebanon, culminating in the devastation of its capital Beirut; the overthrow of the Shah of Iran by the fundamentalist Islamic clergy that led opposition to his oppressive rule, only to establish a regime that was, if anything, more oppressive and trampled on hard-won human rights, especially those of women; the rise of dictatorial political leaders in Iraq and Syria, followed by belated, disorganised and ill-informed military intervention by Western powers to ‘restore democracy’ (which had never existed there); the destabilisation or outright collapse of these regimes without anything useful being put in their place; and the emergence of assorted rebel groups, including Islamic fundamentalists whose ugly violence has done much to bring Islam into disrepute around the world. All this against the background of the discovery around 1900 of the world’s most extensive oil reserves, which made the region a magnet for foreign countries eager to maintain control of what has become the motor of their economies, and made some local states (or rather, their social elites) rich beyond the dreams of avarice. In the midst of it all, the Kurdish people, divided against their will among four different countries (Turkey, Syria, Iraq and Iran) that all categorically refuse to grant them even autonomy, let alone independence, have found themselves joining forces with whoever seems most likely to help them achieve their goals – which have come no nearer despite decades of fighting.

In short, if ever the world has had a ‘powder keg’ (at least in modern times), this is surely it.

But how did the region come to be known as the ‘Middle East’? ‘East’ as seen by whom? And why ‘Middle’?

The obvious answer to the ‘East’ question is ‘as seen by the West’ – i.e. by the foreign powers that have done so much to damage this part of the world. But even in its dominant language, Arabic, it is known as الشرق الأوسط (ash-sharq al-awsaṭ), in Persian as خاورمیانه (khāvar-e miyāneh, proof if ever it were needed that Arabic and Persian are quite different languages, even though they use the same basic alphabet), in Hebrew as המזרח התיכון (hamizrach hatikhon) and in Turkish as Orta Doğu – all of which literally mean ‘Middle East’. It has been suggested that this terminology is yet another result of prolonged Western influence.

Yet in the West itself it is not universally known as the ‘Middle East’. The usual German term is Naher Osten, and some Eastern European languages still use similar terms that mean ‘Near East’. The French equivalent Proche-Orient is also still common, alongside Moyen-Orient. What’s all this about?

Until the turn of the 20th century, the ‘East’ as seen from the ‘West’ was essentially divided into three parts: ‘Near’, ‘Middle’ and ‘Far’ (or, in some Romance languages, ‘Extreme’ – for instance, Extrême-Orient in French). The meaning of ‘Far East’ has never really been in doubt: the ‘far end’ of Asia, comprising China, Japan, South-East Asia down to Indonesia, and the easternmost parts of Russia (which in fact extend further east than all the rest). What it does not normally include is ‘South Asia’: India, Pakistan and all their immediate neighbours (except Iran).

Conversely, the inhabitants of China and Japan have long referred to countries west of the Arab world as 泰西 (tàixī in Chinese, taisei in Japanese, but written with the same two characters in both languages) – the ‘Far West’. It seems this name may have been invented by Jesuit missionaries as a counterpart to the European notion of the ‘Far East’; but the idea of China being the ‘East’ was well established under Mao Zedong, who famously said ‘The east wind will prevail over the west wind’ (i.e. communist China and its Asian allies will prevail over the capitalist west), and the Chinese national anthem during the 1960s ‘Cultural Revolution’ was called ‘The East is red’.

But the meaning of both ‘Middle East’ and ‘Near East’ has been far less clear – in fact, until the 20th century the term ‘Middle East’ scarcely existed. And the main reason for this was colonialism: Turkish, British and Russian. Up to the 19th century, travellers setting out eastwards from, say, Paris or Rome would first of all find themselves crossing territory held by the Turkish Ottoman empire, from what is now Romania to what is now Iraq; and for simplicity’s sake the whole of the empire was often referred to in the West as the ‘Near East’. Given that at the height of their power the Turks only just failed to capture Vienna, this part of the East was very near indeed! If we briefly disregard Iran, the travellers would then cross British India, which then included what are now Pakistan and Burma (Myanmar); and beyond that was South-East Asia, the start of the ‘Far East’. Everything in the middle effectively belonged to Britain or, further north, to Russia; the two countries ended up fighting over Afghanistan, which had the misfortune to border on both (the fighting between the two great powers in Central Asia was boyishly referred to back then as the ‘Great Game’, with typically arrogant disregard for local people’s wishes or needs). So there was a ‘Near East’ and a ‘Far East’ – but not yet a ‘Middle East’.

In the course of the 19th century, Turkey’s colonies west of the Bosphorus (‘Turkey-in-Europe’) gained their independence one after the other: Serbia, Greece, Romania, Bulgaria, Montenegro and Albania. Bosnia was snapped up by a still greedy Austro-Hungarian empire (unaware it was about to share the same fate as its Turkish adversary); and what is now Macedonia was repeatedly, and bloodily, fought over by all its newly independent neighbours. The ‘Near East’ was no longer so near (although as late as 1908 Bosnia was still officially in Turkish hands). Then came the First World War, which ended in catastrophic Turkish defeat and the collapse of the Ottoman empire. Apart from a toe-hold that included the country’s then capital, Istanbul, ‘Turkey-in-Europe’ ceased to exist. With Turkey now essentially reduced to its Asian homeland, Anatolia, the whole notion of a ‘Near East’ ceased to have any meaning; and the term fell into disuse.

But its ghost survived. Turkey was now simply Turkey; its former European colonies were all independent states; Britain and France controlled most of its former colonies in the Levant and North Africa, from Syria to Morocco (a latecomer in the scramble for colonies, Italy had grabbed the poorest coastal region, Libya, but would lose it again after its defeat in the Second World War); what was now the ‘East’ was no longer ‘near’; but a term was still needed to describe the oil-rich, and strategically ever more important, region between the Mediterranean and the Indian Ocean.

Enter ‘Middle East’ – a confusing term, for a ‘Middle’ East implied the existence of other ‘Easts’ around it. Yet it soon became, and would remain, the name of choice for the region; and from English its use has been spreading to other languages, including German (in which reports hastily translated from English-language media increasingly refer to Mittlerer Osten rather than Naher Osten); Slavic languages are now also tending to use literal translations of ‘Middle East’ instead of the once customary ‘Near East’.

Of course, all this may seem semantic hair-splitting; but the introduction and persistence of this misleading name, even in Arabic and other local languages, surely bears witness to the West’s abiding influence on the region, and the world in general. And, despite China’s growing economic and political clout, it refers to the Middle East as 中东 (zhōngdōng). The character 中 (zhōng) appears in the Chinese name for ‘China’: 中国 (zhōngguó), literally ‘Middle Kingdom’; and the character 东 (dōng) appears in the Chinese name of that 1960s national anthem: 东方红 (dōngfāng hóng), ‘The East is red’. So – even from China’s extreme-eastern vantage point – zhōngdōng again means ‘Middle East’.

Why not some equivalent of America’s ‘Mid-West’ – 中西 (zhōngxī), perhaps? – to reflect the current shift in geopolitical power? I’ve borrowed 西 (xī) from the term for ‘Far West’ mentioned earlier: 泰西 (tàixī). And when I google Chinese equivalents of American names that include ‘Mid-West’, these are the very two characters I find: 中西.

Maybe Chinese isn’t so difficult after all….

Highland hero

In March 1903, within three weeks of his 50th birthday, the Gaelic-speaking son of a poor Scottish Highland farmer (‘crofter’) blew his brains out with a pistol in a Paris hotel room. The reason? A classic Victorian scandal, with its full share of rumour, hypocrisy and refusal to face reality.

Eachann MacDhòmhnaill (or, as he would become known in English, Hector MacDonald) joined the Gordon Highlanders regiment in 1871, at the age of 17, and rose through the ranks of the British army to end up, shortly before his death, as a major-general (one of the very few men to do so, at least back in those days, on merit alone). But his lowly background, plus the fact that he was Scottish rather than English, may in the end have been his undoing.

He saved the supremely arrogant and incompetent General Kitchener’s bacon with a skilful manoeuvre at the 1898 Battle of Omdurman (in modern-day Sudan); and although Kitchener was typically given full credit for the bloody victory (in which 10,000 Sudanese soldiers but only 47 British ones were killed, and the nasty ‘dum-dum’ bullets were used by the British side to horrifying effect – they expanded inside the target’s body, causing maximum tissue damage and shock), MacDonald was also widely praised (especially in Scotland) for his action. The following year – when the use of dum-dums in international warfare was banned by the Hague Convention – the Boer War broke out in South Africa; and MacDonald once again distinguished himself. He was knighted and nicknamed ‘Fighting Mac’, and at least one poem was written in his honour.

By the time the Boer War ended in 1902, with another British victory, MacDonald had been rewarded with a posting to Britain’s Indian colony, and then – just a year before his suicide – to neighbouring Ceylon (modern-day Sri Lanka), as commander-in-chief of the imperial troops there. And it was in Ceylon that things in his life went badly wrong.

As so often, the facts are disputed. MacDonald’s many supporters continue to claim the whole thing was a fabrication born of British upper-class snobbery. The fellow simply wasn’t ‘one of us’, and could not be permitted to penetrate the ranks of the establishment; worse still, he was a Scot, presumably with an audible Highland accent. One way or another, he had to be got rid of – and he was, in the tried-and-tested British manner: sex.

Rumours soon began to circulate among Ceylon’s colonial elite that MacDonald ‘did not like ladies’; and specific accusations soon followed. He was allegedly having sex with the teenage sons of a local dignitary; and another witness had ‘surprised’ him in a railway carriage ‘with four Sinhalese boys’ (the Sinhala were, and are, the majority population on the island). Just how old these various young males were seems not to have been recorded; and in any case that hardly mattered, for – unlike today – sex with someone ‘under-age’ was not condemned on those grounds alone.

The problem – as in the then very recent Oscar Wilde case – was not so much one of age as of sexual orientation, and above all social status. If the teenagers had not been the sons of someone influential, few people would have cared. And the other difficulty was, of course, homosexuality: peccatum illud horribile, inter Christianos non nominandum (‘that horrible crime not to be named among Christians’), as it was coyly referred to at the time. Some dalliance with a native girl of lowly descent would have aroused little comment; but MacDonald had broken all the established rules.

Or had he? His enemies would have crafted their accusations carefully: sexual relations between males, involving the families of influential people, were in many people’s minds the ultimate horror. At this historical distance we simply can’t know for sure if Hector MacDonald was gay, or bisexual. No incriminating love letters have been discovered – and of course they too could be fabricated. But we do have to consider other background details.

At the time of the accusations MacDonald was thought to be unmarried – an unusual state of affairs for a man approaching 50 in such a position, in Victorian Britain (or rather Edwardian Britain, for the aging queen had died in 1901). After he committed suicide it transpired that he had in fact been married for 19 years, and even had a son – but had seen his wife only four times in all those years. Other leading military men’s wartime correspondence with their wives has survived; MacDonald’s has not. You can’t help wondering if this was a ‘marriage of convenience’. Just a year after the wedding, Britain introduced its draconian new legislation against homosexuality, which was to blight millions of men’s lives for the next 80 years. Such was the prevailing mentality.

If the accusations had been false, why did MacDonald do so little to rebut them, if only by revealing his marriage and the birth of his son? For whatever reasons, he did not. Before long he was sent home to London; and it has been suggested that the new king told him the best thing he could do was shoot himself. In short, no-one seems to have been in much doubt that the accusations were true. The main concern was to avoid a public scandal that would tarnish the army’s, and the British Empire’s, image.

The governor of Ceylon eventually advised him to return and ‘clear his name’ at a court-martial – amazingly, despite the change in British law, what he had supposedly done was not a criminal offence in the colony of Ceylon. Then, while staying at a hotel in Paris, he read in the newspapers that ‘serious charges’ were to be laid against him; and after breakfast he went up to his room and shot himself through the head.

Once again, you can’t help thinking that he knew he was unlikely to be acquitted – and hence that the accusations were true. MacDonald had been fêted as a military hero, and could surely have mounted a defence with strong public support; but there were evidently enough witnesses to bring him down if the facts ever came out.

Whatever the truth of the matter, Eachann MacDhòmhnaill was the victim of social snobbery and homophobic prejudice; though today he would probably have been accused, and convicted, of paedophilia. But in those days that wasn’t a crime.

Perhaps worse still, his presumed homosexuality was assumed to make him a risk to military discipline. Such claims have never been substantiated; but being sexually unorthodox has long been deemed a valid ground for excluding people from the armed forces. Only last week the increasingly bizarre American president Donald Trump stated that transgender people should never be allowed to serve in the US military. Why? Simply because he needs something to distract attention from his many other domestic and international policy blunders – and sexual minorities are always a welcome target. As a businessman, Trump knows that freaking out the stupids – which America is sadly only too full of – is a fail-safe strategy.

The problem is that Hector MacDonald was a paragon of military courage – so how could he possibly be homosexual? Once again, Victorian refusal to face reality.

We gay people are quite as brave – or quite as cowardly – as anyone else. No more, and no less. And who we’re sexually attracted to makes no fucking difference.


Seeing red

On the eve of the First World War, most of the combatant armies had neutral-coloured uniforms: the Germans, Austro-Hungarians, Italians, Serbs and Bulgarians wore various shades of grey, the British (and other British Empire troops), Americans, Russians, Turks, Belgians, Greeks and Japanese various shades of buff or khaki, the Poles and Portuguese dull blue, and the Romanians dull green. The only exceptions were the French.

I first became aware of this when I recently saw (for the second time in my life – the first was just after it came out in 1969) Richard Attenborough’s film of the stage play Oh! what a lovely war, which recaptures the mood of the First World War through the often ironic but often also heart-rending British songs written at the time. Early in the film we see a French army officer, finely played in wonderfully French-accented English by the actor Jean-Pierre Cassel – and to my amazement he is dressed in a sky-blue jacket and bright-red trousers, which were still the official French army uniform at the outbreak of war.

There was a time when armies went to war dressed as colourfully as possible – part of the myth of war as a glorious thing that all young men should be proud to take part in. The uniforms at the Battle of Waterloo in 1815 were a splendid mixture of primary colours, gold and silver braid and dazzling plumes and pompons that surely caught the young women’s fancy as their menfolk marched by in serried ranks, and attracted those menfolk into the army in the first place. Another, more serious advantage of all this colour was that it enabled soldiers to tell who were their comrades and who their enemies – which was very useful on the battlefields of the time. In those days gunpowder emitted tell-tale puffs of smoke as it ignited in rifles or cannon, and soldiers generally fought in much the same serried ranks as they had marched off in, so being conspicuous was not yet thought of as a drawback; and a mass of colour – red, in particular, is now known to make people feel more aggressive – was a good way to strike fear into your foes.

As late as 1879, British soldiers were still going into battle dressed in the bright-red jackets that had earned them the nickname ‘redcoats’ (as the Americans knew them during the War of Independence back in the 1770s). Until shortly before, the red colour had been made using an ancient dyestuff from the madder plant, which tended to fade with time to a less conspicuous brownish-pink; but in 1873 this had been replaced by a far brighter synthetic dye that did not fade, and it was in these scarlet jackets that the British army invaded the kingdom of the Zulus, just north of Britain’s South African colony of Natal.

Whether or not the conspicuous red coats contributed to their defeat, the British were routed by the Zulus at the Battle of Isandhlwana in early 1879. The 1964 British film Zulu, which depicts the heroic British defence of Rorke’s Drift at the tail end of the battle, shows the soldiers wearing scarlet coats that stood out like blazing torches against the dun-coloured landscape of the South African veld; and the Zulu king Cetshwayo is said to have sent his men forth with the command ‘March slowly, attack at dawn and eat up the red soldiers.’ It is just conceivable that ‘red’ referred to the British soldiers’ skin colour, for they burned easily in the subtropical heat (the battle was fought in January, midsummer in the southern hemisphere), and the local Afrikaner (‘Boer’) population nicknamed them rooineks (‘rednecks’), to this day a rude word in Afrikaans for British people or English-speaking South Africans; but their clothing seems the more likely reason. Sadly for Cetshwayo and his people, the British returned in force later that year and crushed the Zulu kingdom forever. Today its name survives in that of the modern South African province KwaZulu-Natal.

In any case, by the time hostilities broke out for the second time between the British Empire and South Africa’s white Afrikaner majority in 1899 (the ‘Boer War’, now also known as the ‘Anglo-Boer War’, or to Afrikaners die Tweede Vryheidsoorlog, ‘the Second Freedom War’), the troops shipped out from Southampton to Cape Town were all dressed in khaki, a word borrowed from the Hindustani (Hindi/Urdu) language of India and meaning ‘earth-coloured’, in turn derived from the Persian word for ‘soil’, khâk. As mentioned at the start of this post, within ten years of the Boer War ending most European armies had likewise adopted neutral-coloured uniforms. The reason was, quite simply, the advent of modern industrialised warfare; and it was the Boer War that made clear how fundamentally things had changed.

Whereas being conspicuous had previously not been considered a problem, from the late 19th century onwards it was an increasingly serious one. The invention of smokeless gunpowder in the 1880s meant that enemy riflemen could no longer be pinpointed by the puffs of smoke from their gun barrels; so snipers were suddenly a far greater danger. Although heavy artillery still emitted some smoke, battlefields were no longer obscured by thick clouds of it; so soldiers in brightly coloured uniforms could now be picked out at a great distance, especially with the steady improvement in optical instruments such as field glasses (binoculars). With the simultaneous improvement in long-range artillery (shells could now be fired over distances of many miles, with highly accurate sights), marching into battle in serried parade-ground ranks was increasingly risky, as a single well-aimed shell could kill or incapacitate hundreds of soldiers at one go. And whereas in earlier times soldiers had had to reload their rifles after every single shot, from the late 19th century onwards magazine-fed ‘repeater’ rifles were increasingly common and efficient, allowing men to unleash an unceasing hail of bullets on their enemies. The terrifying noise would be compared to thousands of frying pans sizzling at once.

Long familiar with the local terrain, the Afrikaners were able to exploit the advantages of mobility, rapidly striking and retreating again; they could all ride on horseback, whereas most of their British opponents could not. And what particularly helped them despite their relatively small numbers – they began the war with fewer than 50,000 fighters – was the discovery of the world’s largest gold deposits near Johannesburg in 1886. Before long the Afrikaner ‘South African Republic’ (a.k.a. Transvaal) was the richest country on earth; and that meant it could afford the very best in modern armaments. These included the latest version of Germany’s Mauser repeater rifles (which came onto the market in 1895) and German Krupp and French Creusot long-range cannon. Sensing that it was only a matter of time before the British would again attempt to overrun his country, Transvaal’s president Paul Kruger ordered large quantities of both – just in time for the Boer War.

Used to fighting local wars against ill-trained, ill-equipped tribesmen in far-flung colonies, the British army went into the Boer War on the assumption that they were facing a ‘mere handful of Dutch peasants’ and that the whole thing would be ‘over by Christmas’. To their shock, the war was to drag on for three whole years, and brought some of the worst military defeats in Britain’s history. The relatively untrained Afrikaners had mastered the art of surprise, knew how and where to hide in the vastness of the veld, and used their brand-new weapons to the best of their ability. Their telegraph communications with Pretoria in the north-east were at least as good as the British communications with Cape Town in the far more distant south-west. In short, this was no ‘mere handful of Dutch peasants’; and they were fighting tooth and nail for their independence.

Although the Afrikaners were eventually defeated – partly by General Kitchener’s barbarous expedient of rounding up their women and children from their farms and confining them in some of the world’s first ‘concentration camps’, where they died in their tens of thousands from disease and neglect – they showed the world’s armies where modern warfare was heading. Although they did not wear uniforms (since they were never a formal army), their rough tweed jackets, slouch hats and crumpled trousers enabled them to fade into the background of the veld. Serried ranks of men in brightly coloured uniforms were henceforth a thing of the past.

In 1914 France again found itself at war with its old enemy Germany; and there were immediate calls to abolish the classic French blue-and-red army uniform in favour of something more suited to modern conditions. But at first this was resisted on emotional nationalistic grounds: le pantalon rouge, c’est la France! (‘red trousers are France!’). Only when French troops, highly conspicuous on the new smoke-free battlefields, began to be mown down in droves by the enemy’s new magazine-fed rifles and artillery was it decided to replace the uniform with a more boring, but far safer, dull blue outfit. And by the Second World War even the French army had switched to a form of khaki, which is what almost all armies now wear.

Gone was the notion of war as a glorious thing that all young men should be proud to take part in. Today we know it for the horror it really is – and one that should surely make us all see red.

Still going strong

A miracle of modern technology that took just a few decades to perfect, the bicycle is still going strong. I use one several times a day, except when I’m away in Slovenia (and that’s only because the town I stay in is so small I can walk everywhere, and has quite a few streets with uneven paving slabs and steep slopes). Hardly enough to keep me fit, but it has to be better for me than going everywhere by car (luckily I don’t have a driving licence to tempt me), and it’s certainly better for the environment. Not that bicycles generate no waste at all: one of my lights needs two new AA batteries once or twice a year (the other runs entirely on solar power and leg propulsion), and a bad puncture will occasionally (every two years or so) require the inner tube or even the tyre to be replaced. But these days the batteries are collected for recycling, and so, I imagine, is the rubber. All in all, the bike has a very small ‘footprint’, and it is highly energy-efficient: some 99% of the energy I put into the pedals reaches the wheels.

Bicycles can keep going for decades, and are less subject to changes of fashion than most technical products. To be sure, there are now sports bikes and recumbent bikes; but looking round me as I write this, in full view of several bicycle stands, I can’t see many of either. Sports bikes have the advantage of being lighter and faster; but they have the disadvantage of forcing you to bend forwards with your neck at an angle so you can still see the traffic, and they’re usually designed without a kickstand (to further reduce their weight), so you need something to lean them against when you get off. As for recumbent bikes (‘recumbents’), they’ve never been a success, for riders are so close to the ground that they’re hard for other road users to see, and vice versa; not being able to see over the tops of parked cars not only makes you feel unsafe, it is unsafe. To make sure they’re noticed, most users of recumbents fit them with brightly coloured pennants on thin poles like the ones you see on children’s bikes; but the tops of the poles are at other cyclists’ head level, and I’m always afraid one will poke my eye out as I overtake (the poles are made of flexible plastic and are apt to wave about).

In any case, whole weeks go by without my ever seeing a recumbent – and this is Holland, where there are almost as many bikes as people. Both sports bikes and recumbents have the added disadvantage that, unlike standard bikes, they can’t easily deal with such frequently encountered obstacles as kerbs, loose paving stones or bulges in the road surface caused by tree roots (sports bikes aren’t robust enough, and recumbents are too close to the road). The one undoubted advantage of recumbents – that you’re less likely to hurt yourself if you fall off – hardly makes up for their drawbacks, which include the fact that they’re substantially more expensive than standard bikes (sports bikes are even worse).

The history of the bike began just 200 years ago, in 1817, when a German nobleman called Karl von Drais (rhyming with ‘rice’) invented the first-ever two-wheeled vehicle. He called it the Laufmaschine or ‘running machine’. In English it was soon nicknamed the ‘hobby horse’ or ‘dandy horse’; another name was vélocipède, a French word derived from the Latin for ‘fast’ and ‘foot’; and to this day the everyday French name for a bike is vélo. For a while it was also known as the draisine (in French, draisienne) after its inventor; but this name has since been transferred to larger pedal-driven wheeled vehicles that run on rails. There is currently a draisine service for groups of tourists on a disused section of railway line between a small town near where I live and another small town just across the border in Germany – a pleasant day trip in fine weather.

Although the ‘hobby horse’ – which survives to this day in the children’s ‘push bikes’ (or ‘balance bikes’) that prepare them for the real thing – was certainly a faster means of locomotion than simply walking, and could be steered (the front wheel and handlebars were hinged), it had some major drawbacks. It was terribly rigid, with an unsprung wooden frame and tyreless iron wheels, and riding it on the then invariably cobbled roads was a painful experience (despite the upholstered saddle), since every jolt and vibration was transmitted to the rider. Users therefore tended to take to the smoother but narrower pavements, putting pedestrians (who were not expecting to encounter them there) in serious danger. Worse, it depended for propulsion on exactly the same movement as walking and running: the rider’s feet were in contact with the road surface, and the whole process was tiring. Clearly something had to be done if the new invention were to be a real success.

It isn’t quite clear when the next step forward occurred. A Scotsman called Kirkpatrick MacMillan is said by some to have produced the first mechanically driven bicycle in 1839: still with a wooden frame and tyreless iron wheels, but now with pedals attached by rods to the hub of the rear wheel, which revolved as the pedals were pushed down (so there was no longer any direct contact between the rider’s feet and the road surface, and the amount of foot movement was greatly reduced – less fatigue all round). The front wheel and handlebars were again hinged to allow steering; and the rear wheel, which was used for propulsion, was slightly larger than the front one (in the absence of chain transmission and gears, this was the only way to increase the vehicle’s speed – the larger the drive wheel, the further the bicycle would move with each turn of the pedals). However, the facts of the matter are disputed, and seem likely to remain so.
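The wheel-size point is simple arithmetic: with direct drive, one turn of the pedals is one turn of the drive wheel, so the distance covered per pedal revolution is just the wheel’s circumference. A minimal sketch (the diameters are illustrative, not historical measurements):

```python
import math

def distance_per_pedal_turn(wheel_diameter_m: float) -> float:
    """With direct drive (pedals attached to the wheel hub), one full
    pedal revolution turns the drive wheel exactly once, so the
    bicycle advances by one wheel circumference."""
    return math.pi * wheel_diameter_m

# Illustrative diameters: a modest wheel vs progressively larger drive wheels
for d in (0.75, 1.0, 1.5):
    print(f"{d:.2f} m wheel -> {distance_per_pedal_turn(d):.2f} m per pedal turn")
```

Doubling the drive wheel doubles the distance per pedal stroke, which is exactly why pre-chain designers kept enlarging it.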

The first undoubted improvement in bicycle design took place in France in the 1860s, when Pierre Michaux and Pierre Lallement designed a vehicle with pedals mounted on the front wheel – which, for the reasons just mentioned, was now slightly larger than the rear one. This had the advantage that the propulsion force could be transmitted directly to the drive wheel, which was literally turning under the rider’s feet, making him – at this stage in history it was still considered indecent for women to ride bicycles, and their voluminous clothing was unsuited to it – more aware of how the bicycle was responding; the disadvantage was that the pedals were not in the most natural position (directly below the rider’s centre of gravity) and that having them on the wheel also used for steering made the vehicle harder to control. With rubber tyres still some years in the future, bicycles of this era were nicknamed ‘boneshakers’; the whole experience must have been like trying to ride a modern bike with two completely flat tyres.

In 1869 another Frenchman, Eugène Meyer, invented tensionable wire wheel spokes (the spokes had previously been made of solid metal) that greatly reduced bicycles’ weight and allowed the development of high-wheelers, with a vast front wheel (on which the pedals were mounted) and a tiny rear one. The saddle was now almost directly above the pedals, eliminating the earlier centre-of-gravity problem; but the front wheels were now so large that the bikes were very unsafe to ride. A small protruding step on the frame was needed to help the rider climb onto the saddle (and immediately push down on the pedals in order to stop the bicycle from falling over); and the distance between the rider’s feet and the ground was as much as a metre. The slightest obstacle on the road surface, such as a pebble, was enough to send the rider flying over the handlebars, often with serious injuries – it seems that two fractured wrists were a common result, and some riders were even killed as they attempted to break their fall.

With rubber increasingly imported into Europe from colonial plantations in Asia and Africa, solid rubber tyres soon became available; and since the large wheels allowed a smoother ride, as well as far greater speed, the high-wheelers remained popular despite the hazards involved in riding them. In Britain they were eventually nicknamed ‘penny-farthings’, because the different-sized wheels recalled the very large penny coin and the much smaller farthing coin, worth a quarter of a penny – the name is derived from ‘fourth’ – and still in use when I was a boy (the little farthing was taken out of circulation in 1961, and the big penny ten years later, when Britain finally adopted decimal currency).

Once again, however, something had to be done if bicycles were to become a truly popular – comfortable as well as safe – means of transport; and by the 1880s the solution was in sight. First, tyres that were inflatable, or ‘pneumatic’ (to this day the French word for ‘tyre’ is pneu), which allowed a comfortable and safe ride even on bicycles with human-sized wheels (another Scotsman, John Dunlop, is credited with this invention, but it seems he was reinventing something already designed nearly half a century earlier by yet another Scotsman, Robert William Thomson); and finally chain transmission with gears (devised by the English inventor John Kemp Starley), which allowed high speeds without having to increase wheel size, and eliminated the need for pedals on the wheel that was used for steering. At the same time, the saddle could now be placed between the two equal-sized wheels, greatly reducing the risk of serious injury or worse if the bicycle hit an obstacle on the road, since the rider was much closer to the ground and much further away from the front wheel.
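Chain transmission broke the link between speed and wheel size: the distance covered per crank revolution becomes the wheel’s circumference multiplied by the gear ratio (chainring teeth divided by rear-sprocket teeth). A sketch with illustrative tooth counts and wheel sizes (not historical specifications), showing how a modest ‘safety’ wheel can match a giant direct-drive wheel:

```python
import math

def distance_per_crank_turn(wheel_diameter_m: float,
                            chainring_teeth: int,
                            sprocket_teeth: int) -> float:
    """With chain drive, each crank revolution turns the rear wheel
    chainring_teeth / sprocket_teeth times, so the advance per crank
    turn is the wheel circumference scaled by that gear ratio."""
    gear_ratio = chainring_teeth / sprocket_teeth
    return math.pi * wheel_diameter_m * gear_ratio

# A 0.7 m chain-driven wheel geared 48:16 (ratio 3:1) covers the same
# ground per pedal stroke as a direct-drive wheel three times its diameter.
safety = distance_per_crank_turn(0.7, 48, 16)
high_wheeler = math.pi * 2.1   # direct drive, 2.1 m wheel
print(f"{safety:.2f} m vs {high_wheeler:.2f} m")  # both ≈ 6.60 m
```

The same idea underlies modern derailleur gears: changing the sprocket changes the ratio, so one wheel size serves every speed.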

By 1890, with all the earlier problems solved – apart from punctured tyres, which are sadly still with us – the ‘safety bicycle’ was born; and despite some relatively minor changes since then, this is the bicycle that most of us still use, almost 130 years later.

A remarkably successful piece of engineering, and one that has greatly contributed to social mobility, the emancipation of women and environmental protection. In short, a damn good thing.

Rail disaster

I’ve passed through Brussels Central Station very often in the past half-century, and I’ve meant to say something about it for most of that time. Now this blog gives me the perfect opportunity to do so.

Quite simply, the place seems hardly to have been refurbished in all those years – and most probably ever since it was first built. It turns out to be a latecomer in Belgium’s extensive railway system, having been opened in the year I was born: 1952. By then the Belgian capital had already been served for over 100 years by two outlying stations, Brussels North and Brussels South; but it seems there was a need for a new one right in the city centre – which would probably have been a better location in the first place.

It is important in what follows to remember that Brussels Central was originally designed by the famous Belgian architect and Art Nouveau designer Victor Horta, and completed by someone else shortly after Horta died in the late 1940s.

When I say ‘hardly refurbished’, I’m not talking about the technological accretions that have gone some way towards making up for the station’s many failings. There are now electronic platform signs that announce train departures in alternating French and Dutch versions, to take account of the fact that Brussels is officially bilingual. And the spoken announcements – in French and Dutch for local trains, plus German and English for international ones, all in mellifluous, invariably female native voices – are some of the clearest I’ve ever heard in a railway station, and are only drowned out when a departing or arriving train screeches and thumps its way over the points (might a drop of oil help?).

There are now also modern (though frankly rather inconspicuous) black-and-white signs directing you to the various platforms and station services, using the now familiar international pictograms for ‘exit’, ‘entrance’ and so on. All well and good; but this is not enough to compensate for the fact that Horta designed a modernist architect’s pipe dream, rather than an efficient transport hub for what is now not only Belgium’s but also the European Union’s capital city. Too many architects – and Horta was evidently no exception – seem to see it as their sacred duty to ‘educate’ the rest of us by riding roughshod over such mundane concerns as being able to find your way quickly in a vast structure that is in constant use by tens of thousands of complete strangers.

Brussels Central Station is nothing short of a rail disaster. To start with, it is full of staircases, some of which are now at last also flanked by escalators; but the escalators (like some of the staircases) are very narrow, and at rush hour – which in a place like Brussels is most of the time – become dangerously crowded with people hurrying to get somewhere else. Another big drawback of staircases and escalators is that the differing levels make most of the station invisible to passengers – a problem made worse by Horta’s original heavy marble walls and pillars, which are all in one pale-ochre colour that must have looked very pretty on his drawing board, but in the practical environment of a railway station blend into an indistinct, opaque mass. In this labyrinth passengers simply don’t know which way to look, or which way to turn – hardly ideal when you’re trying to catch a train, which is after all what stations are for.

Horta’s pale-ochre marble has not – as the euphemism goes – ‘aged well’. Perhaps the small cracks in the surface were there to begin with, but perhaps they have developed over time. In any case, in a place where the air was bound to be laden with fine particulates and other pollutants, the cracks in the marble and everything else soon became clogged with black gunk; and much of the station has always looked (and smelled) grimy, for it seems little has been done over the years to clean it. Litter and bird shit have of course been removed, but Horta’s monumental acres of stone have been left largely untouched.

I can only suspect that the Belgian government was reluctant to tamper with a ‘work of art’ by the illustrious Horta. But a railway station, like an airport or a harbour, is more than just a work of art – it’s a public service.

Horta’s name has survived in the title of a small shopping precinct that obstructs and obscures access to Brussels Central from the city’s most central area, round the beautiful square known in French and Dutch as the Grand-Place/Grote Markt. If you arrive on foot from that direction, there is no indication whatever that this is the entrance to Belgium’s main station – just an uninviting grey-and-white void with still largely unoccupied commercial premises, and again a labyrinth of corridors that provide little sense of direction. Half the time the handful of escalators are out of order, forcing passengers to struggle up flights of stairs (again!), or else a ramp that is presumably designed for wheelchair users but is so steep that many of them might well be scared to launch themselves down it, or to try to climb it without electrical power – and it is twice as long as necessary because it curves ‘elegantly’ round the edge. There is a lift, but this too has been out of order for as long as I can remember (and I’m talking here about several years). Yesterday I saw a middle-aged woman helping an older one hobble painfully up the ramp; the ascent was clearly going to take them at least half an hour, and even then they would only just be entering the station.

Ramp just happens to be the Dutch word for ‘disaster’; but of course this is pure coincidence (?).

Yesterday I also saw that the central area of the to my mind totally useless Horta ‘centre’ was taped off with red-and-white plastic and all the glass doors were locked, forcing everyone (including the two hapless women) to use the ramp. Presumably the area is about to be redesigned – not that there are any signs to tell you. But why not tackle the whole station while they’re about it?

Symptomatic of the whole muddle are the makeshift paper signs with arrows and the English words ‘Central Station’ (nothing in the two local languages) that are now taped to every door handle. People entering the area evidently need such guidance because they have no idea where the station actually is; and the only obvious way in – the main staircase (yes, another one) – is now taped off, even assuming you can find it.

Anyone arriving in the EU’s capital city by train must surely be embarrassed to see all this, year after year, decade after decade. Belgium is one of the world’s wealthiest countries, and cannot possibly lack the financial means to do something about this – especially as so little has been done about it for the past 65 years, so that the large amounts of money saved could now finally be put to good use.

Of course, one might argue that it would be an unconscionable waste of public funds to rebuild from scratch a huge railway station that, despite all its manifest visual and practical failings, continues to function without bringing the whole of Belgium’s railway system to a grinding halt – and that there are far too many costly ‘prestige projects’ as it is, at a time when more and more people can hardly make ends meet. But why then have the authorities poured so much money into smartening up the city’s South Station and improving its efficiency?

The answer can only be that the South station is the Belgian hub for three high-speed international rail networks that are mainly patronised by better-off Europeans and wealthy foreign tourists: the Eurostar routes between London, Paris, Brussels and Lille; the French-based Thalys routes serving Paris, Brussels, Amsterdam and Cologne; and the high-speed TGV (Trains à Grande Vitesse) services that now not only span the whole of France but also extend into Italy, Switzerland, Germany and Belgium. Were there no smart, comfortable, efficient facilities, these routes would surely lose passengers who could as easily afford to fly or simply drive their cars (and pollute the environment at their leisure).

Brussels’s three stations neatly reflect a sadly predictable hierarchy in national and international transport. Serving the up-market Eurostar, Thalys and TGV, the South Station is at the ‘high end’ of the hierarchy. The North Station effectively only caters to local people (including many poor immigrants) and is located in one of the city’s less salubrious districts, which tourists seldom see; so it has long been the most neglected of the three stations. In between these two extremes is the Central Station, originally designed to give both Belgian and international passengers a convenient point of access to the city’s most attractive and prestigious districts, sights and shops.

Had the station not been designed within living memory by the country’s most famous architect and urban designer, its unmistakable problems might have been tackled by now. Instead, it seems the Belgian government has continued to put its funding into the station whose users could easily travel by another means, has withheld funding from the one whose users have little choice in the matter, and has hedged its bets about the semi-international one whose users fit into both categories.

Bad planning, bad social policy – and, in an age when money calls the tune more than ever, something we will not soon see the back of.

Rule of lawyers

Back in 1970 a new supermarket chain was launched in Britain. It seems the founder was a personal friend of a former Icelandic prime minister, and it occurred to him that Iceland would be a clever name for his new business, since it specialised in frozen food. The fact that it was also the name of a country (in English, though not in any other language) was an added benefit, and did not appear to bother the authorities in Reykjavík – at least, not then.

But times have changed. We now live in an age when economic interests predominate over all others. Businesses and other organisations go out of their way to defend their commercial or other rights against whatever they consider infringements, however minor; and their weapons of choice are lawyers. But in this case it is not, as you might think, the Icelandic authorities that have taken the supermarket chain to court – it’s the other way round. Incredibly, the company has managed to secure a Europe-wide trademark on the name ‘Iceland’, and is now suing Icelandic businesses that trade abroad and use the name of their country in their marketing for trademark infringement.

The absurdity of this is mind-boggling. For one thing, there has always been a principle in trademark law that everyday words and names can’t be registered as trademarks, as this would prevent people from using their own native languages freely. You’d expect this principle to apply automatically to names of countries; but somehow the supermarket chain persuaded the European trademark authorities to rule otherwise, and now the Icelandic government is trying to get the trademark invalidated on the all too reasonable grounds that it makes it difficult for Icelandic companies to do business. It makes particular sense for them to use English names, because (1) Icelandic names are difficult to pronounce and understand in other languages (to take just one example, the national airline Icelandair is the result of a merger between two companies called Flugfélag Íslands and Loftleiðir), and (2) the name of the country in Icelandic (Ísland) would clearly be confusing, at least in English (of course, it is an island, but that’s not the point).

No-one is trying to stop the British company from operating under the name Iceland; but the company is trying to stop anyone else from doing so. Perhaps Iceland the country could have saved itself a lot of bother by taking Iceland the company to court back in 1970 for misusing the name – but in those days most people’s minds just didn’t work that way. They still relied on a basic sense of decency and common sense.

Next thing you know, the UK poulterers’ association will manage to get the name ‘Turkey’ registered as a trademark, and Turkish businesses will find themselves in a similar predicament. Porcelain manufacturers and China? Hat-makers and Panama? Spice-growers and Chile? And on and on.

There is – or was? – a Dutch band called Dow Jones, named after the American stock-market index. Some months ago I read in a local newspaper that they had received a letter from lawyers in New York enjoining them to change their name forthwith or face legal proceedings. You wonder how the use of the name by a little-known group of musicians in Europe could possibly harm the American company (if anything, you might consider it free advertising, for which anyone should be grateful); and you can only suspect that lawyers scented a chance to get rich.

Unlike some company names, however, Dow Jones has not become ‘generic’, i.e. the everyday word for the product concerned, irrespective of brand. The classic case is the American vacuum-cleaner manufacturer Hoover, whose name became synonymous in British English with vacuum cleaners generally and the whole act of vacuum-cleaning or anything resembling it (‘I’ll hoover the carpet’, ‘he just hoovers up his food’). Companies eventually risk losing their trademark protection because their brand name has become ‘generic’ in this way. Examples include ‘zipper’, ‘velcro’, ‘aspirin’, ‘kleenex’, ‘frisbee’ and ‘videotape’.

But back to geography. When Yugoslavia finally disintegrated into its six constituent republics, all but one had names that preceded the establishment of the unified state: Serbia, Croatia, Bosnia-Herzegovina, Slovenia and Montenegro (a literal Italian translation of the original Slavic name Crna Gora, meaning ‘Black Mountain’). But the sixth republic did not. Its name was Macedonia.

However, simply calling it this is enough to raise national and political hackles – above all in neighbouring Greece, where most people still insist on calling the country Σκόπια (‘Skópia’), after its capital city Skopje. The official Greek stance on the matter is that Macedonia has ‘no right’ to use that name, for it is already the name of a Greek province: Μακεδονία (‘Makedonía’).

To most people outside Greece, the Greek arguments seem specious; for there are quite a few examples in the world of countries (or regions) that have the same name as a region in another country, and no-one seems bothered by the fact. Belgium’s south-eastern province is called Luxembourg; one of the three divisions (‘parts’) of the English county of Lincolnshire was called Holland (until it was abolished in a local government reorganisation in 1974); both Austria and Slovenia have provinces known in English as Styria; both Belgium and Holland have provinces called Limburg and Brabant (though the Belgian Brabant has since been divided into Dutch- and French-speaking sections known as Vlaams Brabant (‘Flemish Brabant’) and Brabant Wallon (‘Walloon Brabant’), and the Dutch one is officially called Noord-Brabant (‘North Brabant’) to distinguish it from its southern neighbours in Belgium, with which it was once united); there is a Mexican peninsula called Baja California (‘Lower California’), and one US state is called New Mexico; both Peru and Argentina have provinces named after the Spanish region of Rioja; and so on.

So by what right does Greece lay exclusive claim to the name ‘Macedonia’? The answer, if it can be called one, is Alexander the Great. Although the evidence is less than abundant, it seems the language spoken in the ancient kingdom of Macedon (or Macedonia) was related to Greek; and King Alexander’s vast empire, which at its height extended as far as the River Indus in modern Pakistan and as far down the River Nile as Thebes (the site of modern-day Luxor), was essentially Greek-speaking, or Hellenistic. This was also the high point of Greek history; although such city-states as Athens and Sparta had exerted great influence, the Greek-speaking world would never again be as unified as it was under Alexander (the later Byzantine empire’s eastern borders extended no further than modern Syria and Jordan, and it only briefly held Italy and small parts of North-west Africa and Spain).

Since Greek influence has steadily declined since ancient times, and the whole country was occupied and oppressed by Ottoman Turkey for four centuries after the fall of Byzantine Constantinople, it is perhaps not surprising that many Greeks look back to Alexander’s Macedonian empire as a lost ‘golden age’ of Hellenism.

But how did one of the six former Yugoslav republics come to be known as Macedonia? Here the answer is Marshal Tito. Being of half-Croat and half-Slovene descent, he was well aware that Serb nationalism and expansionism had always been the bane of Yugoslav politics, and was determined to clip Serbia’s wings, however gently. Part of this involved turning Serbia’s territorial gains along Greece’s northern border (hitherto referred to as ‘Southern Serbia’) into a separate new republic; the local population spoke a language far more closely related to neighbouring Bulgarian than to Serbian and Croatian, and this seemed to justify a new identity. But Tito would surely have saved everyone a great deal of trouble after his death if he had not decided that the new republic should be called Macedonia.

As long as Yugoslavia remained a unified state, there was no real problem; but all that changed when the country broke apart and, one after the other, its constituent republics declared themselves independent. When Macedonia voted for independence under this name in September 1991, Greece promptly denied its right to use it, on the grounds that the name was ‘exclusively Greek’. Non-Greeks who disputed this interpretation were told to ‘read history’ – whereas in fact the historical facts are, to say the least, ambiguous. And since, for good or ill, the inhabitants of present-day Macedonia choose to call themselves ‘Macedonians’ and their language ‘Macedonian’ – as they do – who are the Greeks to say them nay?

But of course matters are not quite as clear-cut as all this. One of the firmer parts of Greece’s argument against allowing its neighbours to use the name is indeed based on history – but far more recent history than many Greeks would like to believe. Alexander the Great’s empire has little to do with the case. Once again, we are talking about 19th- and 20th-century Serb nationalism and expansionism.

As the first south-east European state to gain independence from Ottoman Turkey without immediately falling into the competing clutches of Austria’s Habsburg empire, Serbia had hopes of becoming a major regional power; but unlike all its neighbours it had the serious disadvantage of being landlocked. It was cut off by its geography from all three of the nearest seas (Black, Aegean and Adriatic); and its future prosperity would surely depend on having a major seaport through which to export its products. The obvious candidate was Salonica, known to the Greeks as Thessaloníki and to the Serbs and other Slavs as Solun.

Although Thessaloníki is now known far and wide as Greece’s second-largest city, this was in fact largely a matter of chance. As in that other famous Mediterranean seaport, Trieste, its population had always been a colourful mixture: in this case Greek, Slav, Jewish and Turkish. And at the end of the Second Balkan War in 1913 the Greek army only beat its Bulgarian competitors to the city by a matter of hours. Both Greece and Bulgaria already had coasts and seaports of their own; but Serbia lost out in the race, and was instead forced to seek a new coast and seaports on the Adriatic, which involved absorbing Croatia and Slovenia under the cloak of a ‘united’ Yugoslavia.

Having annexed Thessaloníki by the very skin of its teeth, Greece was only too aware of its northern Slav neighbours’ designs on the Aegean coast; and when Macedonia declared itself independent, Greece eagerly sought evidence that the old Slav ambitions of an outlet to the Mediterranean were still very much alive. Unfortunately, Macedonia did not take long to provide precisely that explosive evidence.

Its flag is to this day based on a historical symbol that the Greeks see as their own: the ‘sun’ of Vergina. There were so many other flag designs it could have chosen – but it chose what was surely the most provocative of all the conceivable alternatives. A nationalist organisation in Macedonia then printed unofficial banknotes that showed the iconic ‘White Tower’ in Thessaloníki. The Macedonian government printed its own far more neutral banknotes; but the fact that it had allowed the other notes to circulate in the first place, rather than prosecute the counterfeiters, was of course grist to Greece’s nationalist mill, as supposed proof that, deep down, its northern neighbour sought access to Greek territory on the Aegean coast.

Finally, and worst of all, in 2006 the Macedonian government decided to rename its own international airport in Skopje after Alexander the Great – a politically inept act that could not have been better calculated to inflame passions at a time when Macedonia was seeking greater international recognition. There’s no reason to name airports after people at all – and why, of all people, someone as divisive as Alexander the Great?

Then again, why did Greece have to rise to the bait – as it predictably did?

Perhaps here we’re no longer talking about lawyers. But I still bet they’re doing damn well out of all this acrimony; for all too often it’s their stock-in-trade.