October 13, 2018
by chris

Blog software update

During the last few days i updated this blog to a new WordPress version and a new PHP version, which resulted in some unplanned downtime yesterday because testing got messed up by delayed configuration changes. But everything should be working again now. If there are any problems please let me know in the comments.

While installing this update i noticed that over the last years i have shown more than 1700 images here – to illustrate, a screenshot of the media library:

October 10, 2018
by chris

OpenStreetMap – challenges of a changing world

I was preparing material for a talk i am going to give at the Intergeo conference in Frankfurt next week and it reminded me of a topic i have wanted to write about for some time here. The talk is going to be about the role OpenStreetMap and open geodata in general had and have for the development of digital cartography – from a mere dematerialization of pre-digital workflows towards rule based cartography, where the work of a cartographer is no longer primarily the processing of concrete data but the development of rules defining the automated generation of a cartographic visualization from generic geodata. I previously presented this idea with a somewhat different focus back at the FOSSGIS conference in Bonn.

Thinking about this subject i remembered a realization i had some time ago: while the success of the OpenStreetMap project is usually attributed to the openness and the community production of the data, this is only half of the story. I will go out on a limb here and say – although i can obviously not prove this – that the success of OpenStreetMap is at least to the same extent the result of OSM taking the revolutionary approach of producing a completely generic database of geographic data. In the domain of cartography this was completely unheard of. And i am not even sure if this was a conscious choice of the project at the beginning or if it was just the luck of approaching the subject without the preconceptions most cartographers had at the time.

And today it is my perception that it is not so much the volume of data, its quality or its free nature that makes even the more conservative people in the field of cartography realize the significance of OpenStreetMap, but its ability to maintain and widen its position in a quickly changing world with very few changes in the underlying base technology and with hardly any firm governance. There have been quite a few voices in the OSM community in the past few years criticizing technological stagnation within the project – a critique that is in parts not without basis. But one of the most amazing things about OSM is that despite such issues the project has been able to manage its growth over the past 14 years without fully re-building its foundations every few years, as almost any comparable, more traditional project would have had to. And there is no reason to assume that this cannot continue for the foreseeable future based on the same fundamental principles – although i specifically refer only to the core principles of the project and not everything that developed around it.

All good, you could think, and proudly lean back – but that is not the whole story of course. Since OpenStreetMap at the beginning was relatively alone with its revolutionary approach to cartography, it had to do most things on its own and out of necessity became a significant innovative force in cartographic data processing. Later the huge domain of Open Source geodata processing and open data formats and service standards developed in parallel to OpenStreetMap – with a few tools having OSM data processing as a primary initial use case – so OpenStreetMap continued in many ways to drive innovation in cartographic technology (although you need to also give some credit to Google here of course).

With institutional cartography starting to adopt the ideas of rule based cartographic design, these tools and the possibilities they offer are no longer exclusive to OSM though. While 5-8 years ago you could usually spot an OSM based map from a distance simply due to the unique design aspects resulting from the underlying technologies, this is no longer the case today. Map producers frequently mix OSM and non-OSM data, for example based on regional cuts, without this being noticeable without a close look at the data.

In other words: OpenStreetMap has lost its complete dominance of the technological field of rule based digital cartography. This is not a bad thing at all since OSM is not a technology project, it is a crowd sourced geographic data acquisition project – and in that domain its dominance is increasing, not decreasing. Still this development has a significant impact on the project, because OSM no longer operates in the separate ecosystem it originally formed by being so very different from traditional cartography, where the only visible competition were essentially the commercial non-traditional cartography projects (Google, Here etc.). Now this field has both widened and flattened. In this widened field other data sources are used as well – in particular on a regional level, but also global data sources generated using automated methods, crowd sourced data like that from Wikidata, and value added derivatives of OSM data. OSM competes with those on a fine grained level, without there being that much technological separation any more due to different cartographic traditions.

As said, the risk OpenStreetMap faces as a result of this development is ultimately not its position as an open geodata producer. The main risk in my eyes comes from the reflexes with which many people in the OSM community seem to react to this development, because they at least subconsciously perceive it as a threat. I see two main trends here:

  • turning away from the principle of generic geodata and the principle of verifiability that made OpenStreetMap successful. People see the strength of OSM in its large community but lack the faith that this will in the long term be able to compete with the combination of institutional data producers, bot mapping and remote sensing in the field of verifiable geographic information. So they want to re-claim the field the institutional data producers are abandoning at the moment because it is no longer sustainable for them – the field of subjective, hand designed cartographic data, the part that is so far specifically not included in the scope of OpenStreetMap.
  • trying to hold on to the comfort of isolation from the rest of the cartographic world by ignoring what happens outside of OpenStreetMap or by declaring everything outside of OSM irrelevant. The large gap between OSM and traditional cartography always brought with it a constant risk of becoming self referential in many aspects, especially as the project grew. The wide adoption of OSM data outside the immediate project environment however counteracted that quite well. But still there is a trend among some people in the OSM community to ignore the complexity of the world and in a way to try to become self sufficient.

I think these two trends – no matter if they are exclusively a reaction to the developments described before or if there are other factors contributing – are probably among the top challenges OpenStreetMap faces these days. As said, the project’s core ideas (generic, verifiable geodata based on the local knowledge of its contributors) are solid and could likely carry the project for the foreseeable future – but only if the OSM community continues to put trust and support in these principles.

I will probably write separately in more detail about the anti-verifiability tendencies in OSM in a future post.

Another development related to this is that while in the OpenStreetMap ecosystem we have an almost universal dominance of open source software, the world of institutional cartography is still strongly shaped by proprietary software. It is no coincidence that Esri a few months ago showed a map service based on proprietary software that clearly imitates the OSM standard style, which is kind of a symbol for rule based cartography in OpenStreetMap. It is clear that companies offering proprietary software will not stay away from rule based cartography. And with institutional customers they are not in a bad starting position here.

This is of course less of a problem directly for OpenStreetMap and more for the OSGeo world.

September 22, 2018
by chris

The Essence of OpenStreetMap

Yesterday Frederik asked in the German language OSM-Forum what community members perceive to be the Essence of OpenStreetMap (in the sense of “What are the essential aspects of OSM without which it would not be OSM any more?”). This is a very interesting and important question. And i believe that the answers people would give to this question, and how these develop over time, say a lot about the state and development of the project. Unfortunately it is of course quite difficult to get accurate answers to such an abstract and difficult question.

Here is my attempt at this. For me the following aspects are essential for OpenStreetMap:

  • Mapping by people for people – this means data acquisition is under control and responsibility of equal individuals and the purpose of data acquisition is primarily use by individuals.
  • The verifiability of data – the core of this for me is in particular the differentiation from projects like Wikipedia, which reject on-the-ground observation and instead try to document the dominant view a society has of reality.
  • Regarding the not directly mapping oriented parts of the OpenStreetMap project – i am primarily thinking of tagging discussions, development work or the development and discussion of practical rules etc. – for me the meritocratic principle is of essential importance here. This means decisions are founded on an evidence based discussion using arguments and verifiable observations as basis.
  • The social contract – which on the one side consists of the open license and the duty to attribute OpenStreetMap and the share-alike rule for the data as the social contract between mappers and data users. On the other side there is also a social contract among mappers – based on the principle of equal rules for everyone and the primacy of the local community (giving local mappers ownership of their map).

As most readers will probably see, these principles are strongly interrelated – removing one of them would lead to a significant imbalance and immense social shifts in the project.

That these principles are questioned by individuals from time to time is natural and not a particular problem – on the contrary, it helps encourage people to question their own assumptions and principles. A question i have asked myself just recently however is whether there is still a majority among people active in OpenStreetMap in favor of these principles. Just because someone likes the advantages and conveniences the success of the project offers does not mean he or she necessarily embraces the principles and values that led to that success and that are necessary for its continued success in the future. What i observe more frequently recently is that people – often for short sighted and egoistic motives – question core principles of the project without realizing that this essentially means putting an axe to the tree they are sitting on.

The principles listed above are just my personal view of the essence of OpenStreetMap of course. Others might set different priorities here. But i would recommend that everyone reflect on and critically question

  • if what you perceive to be the essence of OpenStreetMap is actually a viable basis to carry the project in the long term.
  • if these principles are shared by the majority of OSM contributors.

September 19, 2018
by chris

Arctic autumn

This year’s northern hemisphere summer – while bringing a lot of dry and sunny weather in Europe – was a fairly moderate summer in the Arctic, with relatively limited good satellite image recording possibilities, in particular at very high latitudes. The USGS has continued recording a few Landsat image series in northern Greenland outside the regular Landsat coverage (i covered this before) but they seem to have this fixed on path 40 with no adjustment to the weather, and as a result most of the recordings contain a lot of clouds. The best one this year was the last one, which already features some fresh snow though. I supplemented this with other images from around the same time from further south into a high latitude mosaic covering the northernmost and also most remote area of the northern hemisphere:

As previously discussed, image recordings from high latitudes vary more significantly in lighting than those from lower latitudes, creating additional difficulties when assembling mosaics. The off-nadir pass defining the northernmost part of the image is characterized by the earlier images, further to the east, having a later local recording time. That sounds a bit absurd but makes sense if you consider that the satellite is faster in East-West direction than the Sun in its daily movement in this area.
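This can be illustrated with a rough back-of-envelope calculation. The numbers here are approximate assumptions for a Landsat-like orbit (a period of about 99 minutes and an inclination of about 98.2 degrees, which puts the northernmost point of the ground track near 81.8 degrees latitude):

```python
import math

# Back-of-envelope check: near the northernmost point of a Landsat-like
# orbit the ground track runs almost purely East-West, so the rate of
# longitude change there is the along-track angular speed divided by the
# cosine of the turning latitude.

orbit_period_min = 99.0          # assumed orbital period
turning_lat_deg = 180.0 - 98.2   # northernmost ground-track latitude (~81.8 deg)

# angular speed along the orbit, in degrees per hour
along_track_deg_per_h = 360.0 / (orbit_period_min / 60.0)

# at the turning point the motion is along a parallel of latitude
lon_rate_deg_per_h = along_track_deg_per_h / math.cos(math.radians(turning_lat_deg))

# the Sun moves 15 degrees of longitude per hour everywhere
sun_lon_rate_deg_per_h = 360.0 / 24.0

print(f"satellite: ~{lon_rate_deg_per_h:.0f} deg/h, sun: {sun_lon_rate_deg_per_h:.0f} deg/h")
```

With these assumptions the satellite crosses longitudes about a hundred times faster than the Sun near its turning latitude – so moving east from one pass to an earlier one indeed means a later local solar time.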

The lower sun position leads to a higher fraction of atmospheric light filtering and a larger significance of indirect light, resulting in a more reddish/violet tint of the images. To match this i tried to choose images further south with the same characteristic – which is of course not perfect due to constraints in weather and because image recording plans are selective.

Here are a few crops from the mosaic:

The mosaic can be found for licensing in the catalog on

Another area of the Arctic i would like to show is the Matusevich Ice Shelf in Severnaya Zemlya. I covered its retreat over the past decades in a previous post. This year you could see – in the few clear views within a mostly cloudy summer – that the connection to the Karpinsky Ice Cap is now nearly gone and the small remainder of the ice shelf is now fed only by the Rusanov Ice Cap. Here is a Sentinel-2 image from the end of August with – like in northern Greenland – already a little fresh snow.

For comparison, here is a 2016 view of the area – for earlier views see the post linked above.

August 30, 2018
by chris

More on pattern use in maps

Area fill patterns and their application in cartography are a topic i have written about previously. When we are talking about simple uniform patterns, some of the most important advantages in practical use are:

  • you can differentiate areas without using different colors – and can differentiate much more because the number of discernible patterns you can use is much larger than the number of discernible colors.
  • intuitive understanding of the meaning of a pattern is easier to accomplish than with colors alone. With colors you have a few overall conventions (like blue for water and green for vegetation) and sometimes the rule of similarity (similar colors have similar meaning), but beyond that the meaning of colors in a map is usually something you need to look up and learn, or learn from examples. With a good pattern there is often at least some possibility to deduce the meaning from the shapes and structure used in the pattern.

Here i want to discuss a few examples of how this can be used to design a better map.

Industrial land use

The first example is something i implemented in the alternative-colors style some time ago but had not discussed so far. It concerns industrial land uses in the strict sense – by that i don’t mean landuse=industrial, which refers to land on which industrial infrastructure is built, i mean areas where the land itself is used for industrial purposes, specifically landuse=quarry (mining, extraction of material from the ground), landuse=landfill (where material is deposited), landuse=construction (where stuff is built) and landuse=brownfield (as a precursor to construction).

In OSM-Carto these are currently rendered with three different colors – brownfield and construction are identical. At least two of these have the problem of being too similar to other colors used with a very different meaning. This is a symptom of the standard style having reached the end of the line regarding colors. It is almost impossible to introduce a new color without clashing with existing ones. Most of the more recent color choices there suffer from this problem.
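A crude way to quantify such clashes is to compute the distance between colors. Here is a small sketch using plain Euclidean RGB distance as a rough proxy – a proper analysis would convert to a perceptual color space like CIELAB and use a Delta-E formula – and the hex values are hypothetical placeholders, not the actual OSM-Carto palette:

```python
# Crude check for how close two map fill colors are. The hex values are
# hypothetical placeholders; a serious check would convert to Lab and use
# a Delta-E formula instead of raw RGB distance.

def hex_to_rgb(h):
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(hex_to_rgb(a), hex_to_rgb(b))) ** 0.5

# two hypothetical fill colors that are hard to tell apart on a map ...
print(rgb_distance('#c8b084', '#cbb58a'))   # small distance: likely to clash
# ... versus a clearly distinct pair
print(rgb_distance('#c8b084', '#6699cc'))   # large distance
```

A style with dozens of area colors quickly runs out of pairs with a comfortably large distance – which is exactly the end-of-the-line situation described above.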

What i did now for industrial land use to avoid this is choosing one color instead of three for all four landuses – which, as explained, share common characteristics – and differentiating between them at the higher zoom levels using patterns. Here is how these look.

industrial land use patterns

While most of them (except maybe quarry) are probably not completely intuitive to the average map reader – this is in a way the price you have to pay for a fine grained pattern that can be used on relatively small areas – they certainly offer a better chance of intuitive understanding than plain colors.

Agricultural details

The other example is display of additional details on agricultural landuses, specifically landuse=farmland and landuse=orchard.

First a bit of information on tagging – the differentiation between different plants being grown on agriculturally used land is complex and somewhat arbitrary in OpenStreetMap. This starts with the primary differentiation into different landuses. Specifically we have:

  • landuse=farmland – which is either used as a generic tag for all agriculturally used land or (more commonly) only for uses that are not covered by one of the other following tags
  • landuse=meadow for areas of pasture or where grass is grown for hay
  • landuse=orchard – which is for trees and scrubs grown for fruit and nut production (or other products except wood)
  • landuse=vineyard – which is essentially a separate tag for a subtype of landuse=orchard

Further differentiation is done using crop=* (for farmland) or trees=* (for orchard) or produce=*.

The distinction between landuse=farmland and landuse=meadow is not handled very consistently – there are 19k occurrences of landuse=farmland + crop=grass. And what is to be tagged as landuse=orchard is also somewhat strange – tea plantations are mostly tagged as landuse=farmland + crop=tea, likewise for coffee – though there are also some tea and coffee plantations tagged as landuse=orchard.

The four listed landuses are rendered with distinct styling in the standard style – originally all in different colors; i wrote about the history of the farmland color recently. A few years back i unified orchard and vineyard to use a common base color, differentiating them only by pattern, reflecting the strong similarity between them.

All of this demonstrates a British and Central European perspective based on the forms of agriculture common in these regions. For the structure in tagging this is natural given the origin of OpenStreetMap. Tagging ideas, once broadly established, are pretty hard to change. But that is not the core of the problem. OpenStreetMap has a free form tagging system so mappers elsewhere are free to use their own tags or establish supplemental tags. Since completely new tags have the disadvantage that data users, in particular maps, will not use them at first, supplemental tags are the more common choice. And therefore we have a lot of supplemental tags for landuse=farmland and for landuse=orchard for crops that do not grow in Britain and Central Europe.

And here is where the problem starts – map style developers rightly say they can’t differentiate in rendering all of the different crops and trees that are tagged in a general purpose map, and therefore limit differentiation to the main landuse classes – classes which, as explained, represent a very strong geographic bias. What makes the problem worse is that style developers seem completely unaware of this in most cases. They usually consider the classification to be a natural and globally valid one, even though the fairly arbitrary line between landuse=farmland and landuse=orchard and the special separation of landuse=vineyard demonstrate well that the opposite is the case. If there is a discussion on rendering farmland or orchards you can almost universally see how in the minds of developers these manifest as fields of wheat or corn (or similar) or apple trees and berry bushes, and there is hardly ever reflection about how biased this view is.

A map style aiming to provide mapper feedback has an additional problem here. Not differentiating the main landuse classes but differentiating different crops (like unifying orchard and vineyard but differentiating two classes of orchard based on other criteria) could be irritating for the mapper. But if you are fine with differentiating to some extent it is imperative to not look at this with a narrow European mindset but with a more global perspective.

This is what i tried to do with farmland and orchard rendering – differentiating three different subtypes for each by using a pattern in addition to the generic rendering.

new farmland and orchard types

The types chosen are based on how distinct the different types of plantation are to the observer and how widespread and consistent use of the corresponding tags in OpenStreetMap is. I specifically did not include any type of grain other than rice because in many parts of the world growing of grain is done with crop rotation – therefore a specified single crop is frequently inaccurate. The types of crop i chose are usually not rotated.

Here are a few real data examples for orchard types:

olive tree orchards at z16 – click to view larger area

oil palm plantations at z15 – click to view larger area

banana plantations at z16 – click to view larger area

And here for farmland types:

rice fields at z15 – click to view larger area

hop plantations at z15 – click to view larger area

tea plantations at z16 – click to view larger area

The symbology is derived from the appearance of the plants in question in the field and explicitly not as sometimes practiced based on the product harvested. This matches the approach used otherwise in patterns in the standard style. The reasons for this i have previously explained.

The whole thing is right now of course a depiction of individual types of crops, and it is – as already indicated – not practicable to depict all types of plants grown in agriculture world wide with distinct symbols. Therefore there will be a need to group and classify products, even if some of the plants i now show could, due to their overall importance and special characteristics, constitute a class on their own. Such a classification however must not be exclusively based on the historic dominance of the European perspective in OpenStreetMap as it currently is in the standard style. Developing such a system is new terrain not only for OpenStreetMap but also for cartography in general.

I think this example also nicely shows that when inventing tagging systems it is very bad to base this on some seemingly logical coarse classification system – like the distinction between farmland and orchard – that does not have a clear, verifiable and universally applicable basis. In such cases it is better to resort to defining more limited scope but better defined tags – even if in your culture there are broader terms you might use in everyday conversation.

As usual these changes are available in the alternative-colors style.

August 16, 2018
by chris

New pattern generator version

I had already indicated this when writing about woodland patterns – there is a new version of the jsdotpattern pattern generator available. In it i completely redesigned the symbol selector. It now allows combining arbitrary symbols into sets to be randomly used in the pattern, where previously i used pre-defined sets (which are kept for backwards compatibility).

I added quite a few additional symbols, in particular for trees. Here are a few examples – click on the images to generate the pattern in the pattern generator, where you can adjust the parameters or save it as SVG.


You can find the program with the latest changes on github.
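The core idea – scattering symbols from a user-assembled set across a pattern tile while keeping a minimum distance between them – can be sketched in a few lines. This is a toy illustration in Python, not the actual JavaScript code of jsdotpattern, and all names and parameters are made up:

```python
import random

# Toy sketch of random symbol placement for a fill pattern: pick random
# positions in a tile and reject any that fall too close to an already
# placed symbol, assigning each accepted position a random symbol from the
# user-defined set. A real pattern generator would additionally handle
# seamless tiling across the tile edges.

def scatter(symbols, tile=256, min_dist=24, attempts=500, seed=42):
    rng = random.Random(seed)   # fixed seed for a reproducible pattern
    placed = []                 # list of (x, y, symbol)
    for _ in range(attempts):
        x, y = rng.uniform(0, tile), rng.uniform(0, tile)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py, _ in placed):
            placed.append((x, y, rng.choice(symbols)))
    return placed

points = scatter(['oak', 'birch', 'pine'])
print(len(points))
```

The rejection loop guarantees the minimum spacing; rendering each `(x, y, symbol)` triple as an SVG symbol reference would then produce the pattern tile.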

August 15, 2018
by chris

Rendering implicit embankments

Another post from my series on OpenStreetMap map design – this one is about rendering implicit embankments and cuttings.

Embankments in OpenStreetMap are artificial slopes created mostly to provide a level base for construction of a road or railway or to otherwise artificially shape the topography. Cuttings are a bit like the opposite – an artificial cut into the natural topography created for similar reasons.

What does implicit mean in this context? In OpenStreetMap embankments can be mapped explicitly with a way tagged man_made=embankment drawn along the top of the embankment. This has been rendered in the standard style similar to natural=cliff – with a gray line with small ticks on one side indicating the direction. Implicit mapping of embankments means the embankment or cutting is mapped by adding an additional attribute to the road/railway etc. to indicate its presence. This is done with the tags embankment=yes and cutting=yes. Implicit mapping using embankment=yes is more than twice as popular as man_made=embankment – together, embankment=yes and cutting=yes are used three times as frequently.

But nonetheless embankment=yes and cutting=yes are not rendered by OSM-Carto – because it is somewhat difficult to do so in a way that looks reasonable.

What you can do without much problem is render embankment=yes as a special casing color, similar to the rendering of bridges (or my rendering of fords in the alternative-colors style). But this is rather non-intuitive and cryptic. The more intuitive way to render embankment=yes tagged on a road is to render a line with ticks, like it is used for man_made=embankment, around the road line. Here is how this looks in an abstract test:

very simple embankment rendering and the errors this leads to

As you can see this works nicely in very simple cases but fails very badly in some of the more complex situations. In particular if you have roads with separate lines for both directions mapped separately, like motorways, this is a serious problem.
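For illustration, the naive per-way approach can be sketched like this – a toy Python version for a single straight segment, with made-up function and parameter names; the real style does this in SQL/PostGIS on actual road geometries, and the failures above come precisely from each way being processed in isolation like this:

```python
import math

# Naive embankment tick placement: walk along a road centerline (here a
# single straight segment for simplicity) and emit short perpendicular
# "ticks" at a fixed spacing, offset to one side of the line. Nearby roads
# are not considered at all - which is exactly why this approach fails for
# dual carriageways.

def embankment_ticks(p0, p1, offset=4.0, spacing=8.0, tick_len=3.0):
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length   # unit vector along the road
    nx, ny = -uy, ux                    # unit normal (left side of the line)
    ticks = []
    d = spacing / 2.0
    while d < length:
        # base point on the offset line, tick pointing away from the road
        bx = x0 + ux * d + nx * offset
        by = y0 + uy * d + ny * offset
        ticks.append(((bx, by), (bx + nx * tick_len, by + ny * tick_len)))
        d += spacing
    return ticks

ticks = embankment_ticks((0, 0), (40, 0))
print(len(ticks), ticks[0])
```

The context-aware version described below has to additionally clip these tick lines against the casings of all neighboring roads, which is where the query cost comes from.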

To avoid these problems you need to take the context of every road with an embankment into account, in particular other roads with and without an embankment around it. This gets complicated and expensive in terms of query performance very quickly. Here is what i came up with as kind of a compromise between quality and performance:

more sophisticated embankment rendering

The query for this is about 4-5 times slower than the trivial version in my tests – which sounds like a lot but is actually not too bad. Because of the way queries are performed in the rendering framework used, this is still much less efficient than it could be in theory. For rendering the roads themselves you need to query all the roads within the tile anyway, and you could re-use the results of this query to do the necessary processing for the embankments much more efficiently. But unfortunately there is no way to re-use query results across different layers with mapnik.

Another technical problem that i already experienced with the rendering of springs and water barriers is that for more sophisticated geometry processing you need access to the zoom level dependent line widths of the style from within SQL. To do that without adding a long CASE statement with line width literals to every query where the line widths are needed – all of which would need to be modified whenever you want to change a line width – i created a script (derived from the existing road colors script) that generates both SQL functions and MSS code for defining and selecting the line widths based on a yaml file.
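The idea of such a generator script can be sketched as follows – a simplified Python version that uses a plain dict instead of a yaml file to stay self-contained. The function names, the SQL signature and the MSS variable naming are illustrative assumptions, not the actual code:

```python
# Single source of truth for line widths: one table of widths per road
# class and zoom level, from which both an SQL function (for geometry
# processing in queries) and MSS/CartoCSS variables (for drawing) are
# generated. The real script reads this table from a yaml file.

widths = {
    'motorway': {12: 3.0, 14: 6.0, 16: 11.0},
    'residential': {12: 1.0, 14: 2.5, 16: 6.0},
}

def make_sql(widths):
    lines = ["CREATE OR REPLACE FUNCTION line_width(cls text, z integer)",
             "RETURNS numeric AS $$", "SELECT CASE"]
    for cls, zw in widths.items():
        for z, w in sorted(zw.items()):
            lines.append(f"  WHEN cls = '{cls}' AND z = {z} THEN {w}")
    lines += ["  ELSE 0 END;", "$$ LANGUAGE sql IMMUTABLE;"]
    return "\n".join(lines)

def make_mss(widths):
    return "\n".join(f"@{cls}-width-z{z}: {w};"
                     for cls, zw in widths.items()
                     for z, w in sorted(zw.items()))

print(make_sql(widths))
print(make_mss(widths))
```

Changing a width in the table and re-running the generator then keeps the SQL side and the style side consistent automatically.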

Implicit embankments and cuttings are rendered for all the usual road types as well as railways and waterways.

embankments and cuttings on all kinds of line features

As you can see i decreased the tick distance on the line compared to the OSM-Carto embankment line signature – this works better with high curvature rendering around roads.

I start rendering the implicit embankments from z16. Starting them at an earlier zoom level would mean taking up quite a large amount of space on the map, because normally at z15 and below most roads are already rendered with a line width larger than the actual road and the embankment would further increase that. This would lead to frequent overlap with various features and strange results in some cases, in particular in areas with dense mapping – which is counterproductive as a mapping incentive.

Here are a few examples at z16:

And here at z17:

The last sample shows the rendering of both implicit and explicit mapping of embankments. The implicit variant is here rendered approximately at its real scale. The implicit mapping has the advantage that with this kind of rendering it looks good both at this and at other scales, while the explicit mapping only looks good when the road is rendered in approximately its natural width. If the road rendering is less wide at the higher zoom levels there would be a gap between the road line and the embankment line, and if the road is drawn wider than its real width it will overlap an explicitly mapped embankment, which will then be only partially visible or not at all. Avoiding this by adding displacement of explicitly mapped embankments at the lower zoom levels would be much more difficult. So for high quality maps with relatively simple rendering, implicit mapping of embankments allows better quality results.

If you want to try this change yourself you can as usual find it in the alternative-colors style.

August 11, 2018
by chris

Missionaries for Magic

Once upon a time, a few years ago, there was a startup company called what3words that tried (and apparently still tries) to make money out of selling an address system based on encoding geographic coordinates into a string. To anyone with a bit of background in geodata and geography the idea of making a business out of this was obviously ludicrous but even more ludicrous was the fact that they had some (limited) business success with it.

The thing is, the idea of encoding coordinates in a grid system in some way is not in any way new, so you cannot patent it. And you cannot really claim copyright protection on the encoded coordinates either, so the only way you can try to make money out of this is by keeping the encoding system secret and licensing it for people to use.
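To see how little magic is involved, here is a toy grid encoder in Python. This is explicitly neither what3words’ (secret) scheme nor Google’s system – just a demonstration that quantizing coordinates into a grid and writing the cell index with a symbol alphabet is straightforward:

```python
# Toy coordinate grid code: quantize lat/lon into a 20x20 grid, refine the
# cell recursively and write the cell indices with a 20 symbol alphabet.
# Every level of refinement adds one lat digit and one lon digit, so an
# 8 character code resolves roughly 1/1000 of a degree.

ALPHABET = "23456789CFGHJMPQRVWX"  # 20 symbols, avoiding look-alike characters

def encode(lat, lon, digits=8):
    code = ""
    y, x = (lat + 90.0) / 180.0, (lon + 180.0) / 360.0   # map into [0, 1)
    for _ in range(digits // 2):
        y, x = y * 20.0, x * 20.0
        iy, ix = int(y), int(x)
        code += ALPHABET[iy] + ALPHABET[ix]
        y, x = y - iy, x - ix    # keep the fractional part for the next level
    return code

def decode(code):
    lat = lon = 0.0
    scale = 1.0
    for i in range(0, len(code), 2):
        scale /= 20.0
        lat += ALPHABET.index(code[i]) * scale
        lon += ALPHABET.index(code[i + 1]) * scale
    half = scale / 2.0           # return the center of the final cell
    return (lat + half) * 180.0 - 90.0, (lon + half) * 360.0 - 180.0

c = encode(48.0, 7.85)           # a point somewhere near Freiburg
lat, lon = decode(c)
print(c, lat, lon)
```

The round trip recovers the coordinates to within the cell size – which is the entire substance of such systems; everything beyond that (words instead of digits, shorter codes, branding) is presentation.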

In essence what3words can probably be considered one of the most successful trolls of our society and our economic system in recent years.

For other companies in the domain of location based services, in particular Google, this was and is a nuisance, not only as competition but also because of the ridicule it brings to the whole domain. So Google’s interest here is not so much grabbing the market share of what3words and making money out of the same thing – they have bigger fish to fry. They just want to get rid of the troll that gives everyone in the field, especially them, a bad reputation.

To do that they did the obvious thing: they created an open, non-proprietary encoding system and push it as the better alternative, in the hope that, when faced with the decision to take the free solution or buy the proprietary one from what3words, people will usually choose the free one – provided they put enough muscle behind it in terms of advertisement and visible endorsement by others.

That’s the background of the situation we have right now. What i already found amazing back when what3words started pushing their system was that the whole thing was only ever criticized for its proprietary nature. But there are plenty of other things you can criticize about this idea.

The main sales pitch of these encoding systems is that there are large parts of the world with no reliable and maintained address system, in particular in regions with fast growing populations like large parts of Africa. So the IT engineers in Silicon Valley think: we can solve that and auto-generate addresses for all these poor people without addresses. That would have been fine if they had stopped at this point, providing the encoding system to anyone who wants to use it (minus the attempt to make money from it of course in the case of what3words).

But this is not what happens right now. Since the main motive of Google is to kill off the nuisance of what3words they cannot be satisfied with just offering their open alternative to everyone interested, they need to push it to beat or at least get close to what3words in terms of market penetration. And the whole humanitarian and development aid sector of course jumps on this because they obviously also want to help the poor people in Africa and cannot idly stand by while Google rolls out the best idea since sliced bread.

Time to take a step back and look at what address systems (which is what the location encoding systems are supposed to serve as) actually are. Sarah Hoffmann covered this nicely in her presentation about Nominatim at SotM. Addresses are the way humans typically refer to geographic locations in communication with other humans. Because they are designed by humans for human use and usually have developed over centuries they vary a lot world wide based on cultural particularities. Address systems are usually modeled after how humans perceive their local geographic environment. Because of that designing a geocoder (a tool that translates between geographic coordinates and addresses) is a fairly complicated task.

Now the coordinate encoding systems discussed above are modeled after what is most convenient for computers, the geographic coordinate representation. The encoding is designed to be human readable and suitable for human communication (with what3words and Google following quite different approaches to achieving this) but it is still a code and you have to either memorize it or look it up – you have no mental geographical context for your address in this form. Since the encoding algorithm is nothing you would realistically perform in your mind, using such a code in place of a traditional address requires essentially treating it as a magic code. In other words: the only way you can establish a system like this as an address system for human-to-human use is to detach it from its original meaning and treat it as pure magic.
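
To make the "magic code" character concrete, here is a toy grid encoding in the general spirit of geohash or Open Location Code. The alphabet and the 20 × 20 subdivision are made up for illustration and do not match any real specification:

```python
# toy alphabet of 20 symbols - an illustrative assumption, not any
# real specification such as Open Location Code or geohash
ALPHABET = "0123456789ABCDEFGHJK"

def encode(lat, lon, pairs=5):
    """Encode a lat/lon into a short grid code; each symbol pair
    subdivides the current cell into a 20 x 20 grid. The result is
    purely mechanical - it carries no mental geographic context."""
    lat_span, lon_span = 180.0, 360.0
    lat_pos, lon_pos = lat + 90.0, lon + 180.0
    code = []
    for _ in range(pairs):
        lat_span /= 20.0
        lon_span /= 20.0
        code.append(ALPHABET[int(lat_pos // lat_span) % 20])
        code.append(ALPHABET[int(lon_pos // lon_span) % 20])
        lat_pos %= lat_span
        lon_pos %= lon_span
    return "".join(code)
```

Nearby places share a code prefix, but nothing in the code relates to how a place is perceived locally – it has to be memorized or looked up.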

This is what people in the humanitarian sector apparently try to do at the moment: bulk generating these location codes for buildings in African countries and presenting these as the addresses of these buildings to the people living there. Some of this effort is now spilling over into OpenStreetMap, where storing codes in the database that are just an encoding of the geographic location is of course ludicrous. But from the mindset of the people involved in those projects it makes sense: to get people to adopt these codes in human-to-human communication and thereby give them an actual social meaning you have to – as explained – establish them as magic codes detached from their origin.

I find the attitude underlying these efforts (whether based on a proprietary or an open encoding) pretty cynical and inhumane. Instead of helping and advising people in African villages in developing their own local address system based on their local circumstances and specific needs you develop a system of magic codes chosen because it is convenient to program and nudge people in Africa to organize their lives around this system of codes. The arrogance and ignorance of history that shines through in this is fairly mind-boggling.

Now to be clear about this: I think most people voicing their support for such location code systems these days are probably blissfully unaware of this background, which is partly why i explain it here.

And there is nothing inherently bad about encoding geographic coordinates in some form. It is mostly pointless but it can have its uses, in particular in human-to-computer interaction. But then we are not talking about an address system any more but about a coordinate specification and encoding system.

By the way, what Google is now pushing is just a more primitive version of a pretty old idea. Google’s system degrades and fails towards the poles – a problem that can be easily avoided by putting a tiny bit more brain into it. But Google as usual is satisfied with a 90-percent solution.
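
The degradation towards the poles is easy to illustrate: a grid cell of fixed angular size covers an ever smaller east-west distance as the cosine of the latitude shrinks. A small sketch – the cell size used here is an arbitrary assumption for illustration, not the actual specification:

```python
import math

EARTH_RADIUS_KM = 6371.0
CELL_DEG = 1.0 / 400.0  # arbitrary illustrative cell size in degrees

def cell_width_km(lat_deg):
    """East-west extent of one fixed-degree grid cell at a latitude."""
    circumference_km = 2.0 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))
    return circumference_km / 360.0 * CELL_DEG

# a cell that is about 278 m wide at the equator shrinks to some
# 24 m at 85 degrees latitude while its north-south size stays fixed
for lat in (0, 45, 70, 85):
    print("%2d deg: %6.1f m" % (lat, cell_width_km(lat) * 1000.0))
```

So the cells become increasingly elongated towards the poles – which is the distortion a slightly smarter grid layout could avoid.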

Update: Frederik has written a FAQ on the subject addressing a number of practical questions around it.

August 4, 2018
by chris

On the discussion on OSMF supported vector tiles maps

After my initial report on the SotM conference in Milano there are a few things i would like to discuss in more depth. The first one is the vector tiles topic.

A bit of background first: the term vector tiles has been kind of a white elephant in the OSM community for the past couple of years. I think over the years vector tiles have been proposed as the magic solution for just about every problem in community map production that exists.

When used with actual meaning and sense and not just as an empty buzzword the term is used for two fairly different things:

  • for the idea of caching the results of queries in a tiled map rendering framework. In a classical rendering system like OSM-Carto the map tiles are generated based on a number of queries of the data from a spatial database (usually postgis) for the area of the tile rendered. The results of these queries are thrown away right after the tile has been rendered. By caching the query result you can much more efficiently render different styling and tile variants – like different labeling languages, different color schemes or different resolution tiles.
  • for the idea of tiled rendering of the map on the client (web browser) instead of the server based on tiled vector data. This has similar advantages as in the first concept but in particular it allows the map provider to outsource rendering and supplying the computational capacity for it to the map user.
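
Both approaches address the map through the same tile pyramid. For concreteness, this is the standard Web-Mercator ("slippy map") tile indexing scheme used by most tiled rendering systems, as a minimal Python sketch:

```python
import math

def tile_for(lat_deg, lon_deg, zoom):
    """Return the (x, y) index of the Web-Mercator tile containing
    the given coordinates at the given zoom level."""
    n = 2 ** zoom  # the world is an n x n grid of tiles at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

Both server side query caching and client side rendering key their data by such (zoom, x, y) coordinates – the difference lies in where the rendering from that data happens.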

As you can probably imagine there are technological similarities between implementations of these two approaches so use of the same term for both of them is not without basis. But it is always important when you talk about vector tiles to make clear which of these two ideas you are talking about.

The first approach described above is fairly unproblematic. It is widely used in maps produced today without this necessarily being visible to the user. Of the maps available on openstreetmap.org several are using these methods and there also exists a port of OSM-Carto that uses server side vector tiles. The second approach however is more tricky, because for this to work without serious performance issues the vector tiles transferred to the client must not be much larger than raster tiles. This is extremely hard and in practice it is almost never achieved. For one specific rendering task, the plain color rendering of polygons, i discussed this in more depth some time ago. What map producers currently do to work around this problem is use massive lossy vector data compression. And compared to lossy raster image compression, where decades of research went into the methods used and we have specialized methods for all kinds of specific applications (like raw photo compression), these methods are relatively crude. You can see that in most maps based on client side rendering, where the appearance of the map is usually primarily defined by the data volume reduction needs rather than by cartographic requirements and considerations, and a significant part of the information communicated to the user by the map is compression artefacts rather than geographic information.
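
For illustration, the classic building block of such lossy vector data reduction is line simplification in the style of Ramer-Douglas-Peucker. This is only a sketch of the geometric part – real tile pipelines additionally quantize coordinates and drop whole features:

```python
def simplify(points, tolerance):
    """Ramer-Douglas-Peucker: recursively drop points that deviate
    less than `tolerance` from the chord between the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    # perpendicular distance of each interior point to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / length
             for x, y in points[1:-1]]
    worst = max(range(len(dists)), key=dists.__getitem__)
    if dists[worst] <= tolerance:
        return [points[0], points[-1]]
    split = worst + 1  # index of the worst point in `points`
    left = simplify(points[:split + 1], tolerance)
    right = simplify(points[split:], tolerance)
    return left[:-1] + right
```

The larger the tolerance, the fewer points survive – which is exactly where the visible compression artefacts in client side rendered maps come from.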

So much for general background. What i now want to discuss and comment on a bit here is the proposal from Richard Fairhurst to initiate a project to provide vector tiles for client side rendering (the second approach above) on OSMF infrastructure. There are a few blog posts and messages related to that and there was a breakout session at the SotM conference about the topic.

The alternative-colors style – more eccentric and less purple than OSM-Carto i guess

In general and as i said before i support the initiative to increase the diversity in community created maps available to OpenStreetMap users. But due to the white elephant nature of the topic there is a lot of blind enthusiasm surrounding this that can easily lead to ignoring important things. I want to point out a few of them here – some of them i have already mentioned at the SotM meeting, others are new here.

1) It is currently fundamentally impossible to fulfill all of the functions of the current standard map style with a framework based on vector tiles for client side rendering. This might not seem overly relevant because OSM-Carto at the moment unfortunately all too frequently ignores the requirements of these functions as well (see here) but the fundamental difference is that with OSM-Carto this is a (bad) choice that can easily be changed, with vector tiles for client side rendering this would be much harder.

Most people at the SotM meeting were aware of this limitation but i fear that quite a few people ultimately have a strong desire to push this approach at all costs and just agree to this in an attempt to pacify potential opposition while still believing the white elephant to be the solution to any and all problems. So i will repeat in all clarity here: the client side rendered vector tiles approach is currently fundamentally incapable of generating maps that fulfill all of the core functions of the OSM standard style and there are no technological innovations visible on the horizon that are likely to change that.

2) It would be important IMO to consider and discuss the question whether the OSMF should become active in this field at all. If you look at the OSMF mission you can see that providing maps for all kinds of purposes is not part of that mission. That the current standard map style fulfills important functions for the OSM community and for the public image of OpenStreetMap is generally accepted (although you can of course argue how well it actually fulfills these functions). So i think a new, additional map rendering project would – to justify receiving significant support from the OSMF in the form of computer infrastructure – have to state what important functions it aims to fulfill and demonstrate that it is capable of doing so within the spirit of the OSMF mission.

There seems to me a significant likelihood that such a project from a user perspective could end up being more or less a knock-off of the OpenMapTiles project or similar and in that case it would IMO be fair to ask if this would warrant the OSMF investing resources in such a project.

3) Paul Norman in the discussion mentioned an important point: The number of developers in the OSM community capable and interested in productively working on map rendering and map design projects in general is fairly limited. This means people will ultimately vote with their feet. This is similar to what i wrote about OSM-Carto last year: The question that will most likely decide on the success of such a community project (no matter if run on OSMF infrastructure or not) is if it can successfully attract competent and committed developers capable of actually achieving whatever goals the project started with.

Not that it matters that much (since i am not very qualified to help kicking off such a project anyway) but i have at the moment rather limited interest in this project myself because my interest in community maps lies primarily in those areas for which, as said, the client side rendering approach is currently unsuitable. But this is not set in stone and it is quite possible that there are aspects of such a map project that could turn out to be interesting for me as well.

So much for the specific plans for an additional map rendering project – which, as said, i would support as an increase in diversity in community maps. But i also want to put a bit of a different perspective on the topic of the future of OSM community maps in general:

Richard’s blog post starts with an accurate analysis of what makes OpenStreetMap different from the big corporate players. The visualization or map rendering part of OpenStreetMap – the standard style – has historically been an important part of OSM becoming a very distinct player in the field. OpenStreetMap would probably not have developed into what it is today if its main visualization platform had aimed to be a Google Maps lookalike. Instead the standard style, as i pointed out before, has been pushing the boundaries of what is possible technologically and cartographically in quite a lot of things for a large part of its history – and in many aspects in a very different direction than where Google and other commercial map providers are going. And it seems to me that this had a significant part in OSM being able to concentrate on its core values and not follow short term trends that seem to promise a quick way to success. That this has diminished more recently in OSM-Carto development is something most people more intensively involved with OSM feel, and it certainly is a huge part of what motivates people now betting on and pushing for the white elephant so to speak.

But trying to address this problem now by crawling under the wings of Mapbox or other big corporations and making OSM’s public image fully dependent on technology developed and controlled by corporate players would in my opinion be a big mistake. Open Source or not – the past experience with Mapnik and Carto has, i think, quite well demonstrated that OSM currently lacks the resources and expertise to develop, or to organize development of, a map rendering framework for the specific needs of the project independent of either the big corporate OSM users or the broader FOSS community. OSM will either need to invest into improving capabilities in this field within the project (which is not that feasible because OSM as a project does not have the level of organization or resources for that) or it would need to reach out more to the FOSS community to foster independent development of map rendering tools for OSM’s current and future technological and cartographic needs. Projects in that field already exist (like Mapserver for example) but they are currently mostly developed for smaller commercial users and public institutions. Getting these projects to include OpenStreetMap community maps as an important use case, or initiating new projects for OSM’s needs together with the independent FOSS development community, would be a practical approach that could ensure an independent future for OSM in terms of community maps.

And yes, vector tiles (in the generic sense described above, less in the sense of a specific file format the specifications of which are under control of Mapbox) could likely be a part of such developments. But they are not the white elephant many hope for.

August 3, 2018
by chris

SotM Milano – a summary

I have returned from Milano (from a warm northern Italy to a similarly warm southern Germany) and completed viewing most of the talks i missed at the conference – those that were recorded and that i did not get around to attending. Based on that, here is a quick summary. I might cover some more specific topics in separate posts later.

First a bit of statistics based on the attendee list – which is not completely reliable because it does not exactly represent who was at the conference and because the company name is just a free form field. Note to the SotM WG: please don’t provide the attendee list as a ridiculously convoluted PDF. This won’t prevent Google from harvesting the data in there and it makes this kind of analysis much more difficult. I also would consider it very useful if in the future you asked people during registration for a bit more information on themselves for statistical purposes – this could provide a lot of useful insight into the visitor structure of the conference.

There were 355 pre-registered attendees according to the list, of which 209 have a company name specified (after removing a few obvious errors where the field was interpreted incorrectly). That is about 60 percent. As said this is not a really reliable indicator but it is clear from it that the majority of the attendees were visiting SotM either as part of their job or with their visit being paid by an organization.

The companies/organizations with the largest numbers of attendees were:

Telenav: 8
Facebook: 7
Mapbox: 7
Microsoft: 5
Grab: 5
Heidelberg Institute for Geoinformation Technology: 5
HOT: 5
Politecnico di Milano: 4
MapAction: 4

The geographic distribution of the attendees was as follows:

Naturally the countries with a short travel distance brought in the largest number of non-corporate, non-organization visitors. Of the 66 visitors from Italy only 30 have a company name specified. Of the 58 from Germany it is 25. For the United States on the other hand it is 35 of 47. As said the accuracy of these numbers is not very good but overall it seems quite clear and understandable that when coming to the conference requires a long and expensive journey this significantly reduces the likelihood that a hobbyist community member will come. Conversations seemed to confirm this: when talking to people from outside Europe, most either had some business connection or are involved in some project that goes beyond a hobby.

The scholarships

There would probably be quite a lot to be said about the scholarship program but so far we seem to have no information on the scholarships beyond what can be found in the program booklet which lists the names of 17 OSMF scholars.

The program

As i already wrote in the pre-conference post the program was not really of particular interest for me. There was no talk i considered a must see and after looking over most of the talk recordings this seems to be confirmed. This absolutely does not mean the talks were bad or that they were not interesting for me – not in the least. But i did not try to watch as many talks as i could but instead spent more time talking to people. This is a bit of a dilemma of course since listening to talks can also be a good starting point for approaching others and starting a conversation.

Since not all the rooms were recorded on video this also meant that i missed a few of the talks without the opportunity to watch them afterwards. I however hope there will be a more or less complete collection of slides available for all the talks – if you gave a talk and have not yet sent the slides to the organizers please do so.

Meeting people

As already indicated, meeting and talking to people was my main goal for the conference. There were good opportunities for that, although with more than 350 people there were also plenty of cases where you failed to meet someone for the whole three days because you just never really ran across each other. One thing that worked amazingly well was being introduced to others by someone who already knows both people. Christine Karch in particular seemed to be very industrious at that. This is something i can very much recommend to others at such conferences – if you are interested in meeting someone but are either reluctant to simply walk up to them or just can’t find them because you don’t know what they look like, you can ask someone who knows both of you to make the introduction. Such introductions can also help bridge language barriers by helping out with a bit of translation.

I in particular enjoyed meeting and talking to Dorothea Kazazi, Martin Koppenhoefer, Nicolas Chavent and Rafael Avila Coya, none of whom i had met in person before – but of course also many others who i had met before.

The social event

The place of the social event was nice and the food was good but it was not ideal for an OSM conference in several regards:

  • The constraints of entering the place (practically the requirement to wear shoes and that you were not allowed to take larger bags or other things into the place but had to deposit them at the entrance) were something the organizers should have announced in advance. One person from the German community who routinely walks barefoot and had no shoes with him that evening was not allowed to enter, and many were uneasy with leaving their bags with valuable stuff like laptops or cameras.
  • For most of the conference visitors the social event is primarily an opportunity to talk to other visitors. The music at the venue, which got louder as the evening progressed, made this unnecessarily difficult.

The awards

Since i somewhat unexpectedly won the award for influential writing (sorry Anonymaps) it seems a bit ungrateful to criticize the awards – but i will do it anyway. Apart from the general and hard to solve problem of English language bias, which i mentioned previously, i also have a problem with the innovation category, where none of the nominees would qualify for what i would consider innovative work. This was similar in previous years. I would probably just remove that category from the awards in the future. The way the awards are run they are essentially a popularity contest, and popularity and innovation are simply two things that normally do not go hand in hand – innovations, if they become popular at all, typically only do so quite some time after being made, while the awards are for work made in the previous year.

I would also suggest two further changes:

  • limiting the awards to individuals and small groups of identifiable individuals.
  • adding a ‘none of the above’ option to the voting form and not issuing the award if this option receives more votes than any of the others.

In any case congratulations to the other winners, all of whom – apart from the wrong categorization in the innovation category – i would without reservations consider deserving winners. This does not necessarily mean that the other nominees would all have been less qualified. We all for example had a good laugh about the fact that Simon Poole lost to Richard Fairhurst by one vote after having previously given a recommendation to vote for Richard.

Next year

In my pre-conference post i mentioned that it is unlikely that the SotM is going to take place as close to where i live as this year any time soon – seems i was wrong about that. For me Heidelberg is obviously convenient, but this also means there is a clear trend towards the SotM being concentrated on Europe again – with three out of four conferences taking place in Europe. This contrasts with the four years before, where three out of four were outside of Europe – kind of in compensation for the first four years, which all took place in Europe.

Some general thoughts on the conference

For me personally the SotM visit was a pleasant experience. I however have a seriously uneasy feeling about the fact that the SotM claims to be a conference for the whole OSM community which it from my perspective clearly is not. Given the size and diversity of the OSM community this claim seems unrealistic anyway but maintaining the pretense kind of stands in the way of developing organization and structure of OSM conferences in a direction that is sustainable and productive for the project.

What SotM practically consists of currently seem to be three groups of people:

  • the business visitors who visit the conference as part of their jobs.
  • the international OSM jet set consisting of relatively wealthy active OSM hobbyists who are able and willing to invest the money required to visit the conference from their own pockets.
  • members of the local communities near the place the conference takes place.

Everyone else, in particular local mappers and community members from elsewhere, is not realistically present at the conference – even if scholarships might add a few of those. No one should make the mistake of assuming that the visitors of SotM, or even the non-business part of them, are even remotely representative of the global OSM community.

The main difficulty of planning the SotM conference seems to be balancing the interests of the three groups mentioned. Even before visiting the conference this year my opinion on this has been that emphasizing the weight of the third group and making sure to widely rotate the location of the conference would be the best approach – maybe even to the point of not organizing a separate international conference but instead every year hooking into a different regional conference and giving it special support during that year. But since of the three groups of people mentioned the third one is quite clearly the least influential and least powerful one i don’t have the illusion of this being likely to happen.

July 26, 2018
by chris

Milano and SotM

I will be on my way to Milano tomorrow for visiting the SotM 2018 conference.

SotM never had a particular appeal to me in the past years in terms of the program but it is likely not going to take place as close as this year any time soon (the journey to Milano from here via train takes about as long as to Hamburg – and you don’t even have to change trains) so it is a good opportunity to get a first hand impression. And i look forward to the opportunity to meet various people and talk about OpenStreetMap, cartography, geodata etc.

July 24, 2018
by chris

More new colors

I made some more high impact color changes to the alternative-colors style i want to quickly discuss here.

Farmland coloring

First i changed the farmland color. Farmland was for a long time rendered in OSM-Carto with a fairly dark brown tone. This looked odd, in particular in contrast to the brighter urban landcovers. Since farmland covers pretty large areas in regions with intensive agricultural use a brighter color makes more sense. The color was therefore changed into something significantly brighter several years ago.

This was a big improvement but maintained the oddity of using a brown color for something vegetation related.

The problem is that in the bright color domain you have relatively little room for multiple distinct colors. Therefore the color had to be pretty strong to be discernible from the other bright colors, which was something many people also disliked.

The solution i implemented now is essentially a color swap (plus some tuning) of the farmland color and the education/hospital (societal_amenities) color. This makes quite a lot of sense because of the similarity rule (that features similar in meaning and purpose should use similar colors and those different in meaning and purpose different colors). This is a bit of a disentanglement of area color use in the style.

farmland colors in different OSM map styles

the new farmland color – click to see on larger area

I had contemplated this change for quite a while already but originally was not so satisfied with the result. With some tuning and some time for getting used to it i now however think this works quite well.

Road colors

With the road colors i implemented what is essentially a shift of the road colors by one class downwards and the extrapolation of a new color for motorways at the top, extending the existing scheme.

A bit of background for that as well: Back when the current road color scheme in OSM-Carto was developed there were essentially two major constraints:

  • The colors that could be used were the red-orange-yellow-white progression that had already been used for roads previously (plus the green and blue colors we wanted to stop using for roads). It was not possible to go beyond a red tone in hues since that would have led to confusion with the purple boundaries at low zoom levels.
  • The color differences between the individual classes had to be large enough to be able to reliably distinguish between them.

These constraints meant the number of distinct colors had to be reduced to five (red, dark orange, bright orange, yellow, white) and tertiary roads lost their distinct color.

With purple not being used for boundaries any more in the alternative-colors style i can lift the first constraint and extend the color palette to purple and move back to six road colors.

the road color scheme in OSM-Carto (top) and here (bottom)

Here is how this looks practically at various zoom levels.

z14 – click for larger area

z13 – click for larger area

I also updated the low zoom rendering demo with the new road color scheme and updated data.

Update: Based on the remark by Ilya in the comments below i adjusted the color calculation script to limit the darkening of the motorway color. This makes motorways somewhat brighter than in the samples above, in particular at the lower zoom levels. You can see this in the samples in the readme and in the low zoom demo.
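
As a rough illustration of what such a color calculation might do, here is a sketch that extrapolates a new top color from the trend of the last two colors of a ramp in HLS space. The hex values and the method are illustrative assumptions on my part, not the actual colors or script of the style:

```python
import colorsys

# illustrative placeholder ramp, NOT the actual OSM-Carto road colors
ramp = ["#ffffff", "#f7e680", "#fca768", "#e06860", "#c84040"]

def hex_to_hls(c):
    r, g, b = (int(c[i:i + 2], 16) / 255.0 for i in (1, 3, 5))
    return colorsys.rgb_to_hls(r, g, b)

def hls_to_hex(h, l, s):
    clamp = lambda v: min(max(v, 0.0), 1.0)
    r, g, b = colorsys.hls_to_rgb(h % 1.0, clamp(l), clamp(s))
    return "#%02x%02x%02x" % tuple(round(v * 255) for v in (r, g, b))

# continue the hue/lightness/saturation trend of the final color pair
# one step beyond the end of the ramp to obtain a new top color
(h1, l1, s1), (h2, l2, s2) = hex_to_hls(ramp[-2]), hex_to_hls(ramp[-1])
new_top = hls_to_hex(2 * h2 - h1, 2 * l2 - l1, 2 * s2 - s1)
```

The clamping in hls_to_hex is one simple way to limit how dark such an extrapolated color can get – the same kind of adjustment as the darkening limit mentioned in the update above, though the actual script may do this quite differently.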