Imagico.de

blog

Northeastern Alps by Landsat 1 in 1972

April 20, 2018
by chris
0 comments

Happy Anniversary Landsat Open Data

Ten years ago today the USGS started opening the full Landsat image archive as open data. Although this was not the first release of satellite imagery as open data – a selection of Landsat images had been opened before and MODIS data had likewise been usable by everyone for years – it can today still, without reservation, be called the historic decision that most strongly shaped the satellite image landscape of today.

Here is one of the early Landsat images from the Landsat 1 Multispectral Scanner System (MSS), recorded in 1972. Since the MSS did not record a blue spectral band this rendering uses a synthetic blue channel estimated from the other spectral bands (i wrote about this before in the context of ASTER images).
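As a rough illustration of the general idea (not the exact method used for this particular image) here is a minimal Python sketch: a relationship between the blue band and the green and red bands is fitted on a reference scene from a sensor that does record blue (Landsat 8 OLI for example) and then applied to the MSS green and red bands. All variable names are placeholders for reflectance arrays scaled to the range 0 to 1.

```python
import numpy as np

def fit_blue_model(green, red, blue, degree=2):
    """Least-squares fit of blue as a low order polynomial in green and red."""
    g, r, b = (x.ravel().astype(float) for x in (green, red, blue))
    terms = [np.ones_like(g), g, r]
    if degree >= 2:
        terms += [g * g, r * r, g * r]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(terms), b, rcond=None)
    return coeffs

def predict_blue(green, red, coeffs):
    """Apply the fitted relationship to bands of a sensor that lacks blue."""
    g, r = green.astype(float), red.astype(float)
    terms = [np.ones_like(g), g, r]
    if len(coeffs) > 3:
        terms += [g * g, r * r, g * r]
    return np.clip(sum(c * t for c, t in zip(coeffs, terms)), 0.0, 1.0)

# hypothetical usage (MSS band 4 = green, band 5 = red):
# coeffs = fit_blue_model(oli_green, oli_red, oli_blue)   # reference scene with blue
# rgb = np.dstack([mss_red, mss_green, predict_blue(mss_green, mss_red, coeffs)])
```

In practice such a simple global regression tends to break down for surface types where blue does not follow the green/red trend, which is why a more careful, surface dependent estimation is usually needed for good visual results.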


April 11, 2018
by chris
0 comments

Codification of contact

This blog post discusses the idea of Codes of Conduct – documents regulating social interaction – in the OpenStreetMap project. I want to focus in particular on Codes of Conduct for non-virtual meetings, i.e. for events where people meet in person.

First a bit of background: Social interaction in a society is normally regulated by two different sets of rules:

  • The social conventions – the non-codified standards of social interaction of a society, largely defined by tradition (we do things a certain way because our parents and grandparents already did so) and often quite specific in their details to a social class, subculture or even family.
  • The local justice system.

Normally following these two sets of rules in everyday life is something we can manage without much effort. But as indicated these are local rules. With cross cultural and international social interaction things become much more difficult. You are very likely to break social conventions in international social interaction and then depend on the tolerance and generosity of others. Because of this, cross cultural international social interaction is usually characterized by careful and considerate actions and reactions in an attempt to find common ground in terms of shared social conventions. A significant part of this is also successfully managing the failure of a social interaction – the organized and respectful retreat from a failed attempt at one.

These mechanisms of cross cultural social interaction have developed over thousands of years. The world’s travel literature is full of stories and anecdotes about positive examples of this – interaction at eye level and cultural exchange – and negative ones: catastrophic failures sometimes leading to violent results, as well as many cases of arrogance and narrow-mindedness leading to a peaceful but completely unbalanced interaction. If you know people well who have significant experience with eye level cross cultural interaction you can usually observe a distinct change in habitus and body language when they meet a person and realize there is a significant difference in social conventions. In cultures where balanced, peaceful interaction with other cultures is common (through trade and travel for example) these mechanisms have often found a place in the culture’s social conventions in the form of certain rituals and procedures.

In my experience how well people can handle this also depends a lot on past experience in dealing with people following very different social conventions. For example people who have grown up in a rural area are often better at this because they tend to be exposed early in their life to the significantly different and contrasting social conventions of life in towns and cities. People growing up in a city, on the other hand, might routinely experience a larger variety of social conventions in their immediate environment (though usually through the anonymity of city life) but often never experience a similarly harsh contrast in those conventions until they have grown up.

And while today we in a way have more frequent cross cultural interaction than at any time in history, due to real time international communication and relatively inexpensive travel opportunities, most of this interaction tends to be highly asymmetrical, and truly balanced eye level cross cultural social interaction has probably – relatively speaking – become a rare exception.

The corporate code of conduct

Codes of conduct were invented as an additional set of rules of social interaction in the context of corporations, regulating the interaction of a corporation with its employees and among the employees. They are created by corporate management and contractually agreed to by the people they apply to.

Reasons for creating these include in particular:

  • avoiding the sometimes unreliable nature of uncodified social conventions.
  • adjusting the social conventions of the ordinary employees (who might come from a significantly different social and cultural background) to those of the management for their convenience.
  • limiting some of the freedoms offered by the justice system and social conventions because they are not considered good for productivity.
  • avoiding conflicts due to differences in the laws and social conventions of employees by imposing a uniform set of rules over them. This is in particular important for international corporations.

In practice a corporate code of conduct is typically meant to supersede social conventions and local laws. It cannot normally contradict local laws (though there are quite a few cases where internal corporate rules are actually in conflict with legal requirements), but since code of conduct rules are usually more restrictive than general laws they practically form the relevant limits of accepted behavior.

If we now have the idea of creating a code of conduct for an international OpenStreetMap meeting – like the SOTM conference – there are, based on what i explained above, two potential goals we could pursue with it:

  • helping and supporting participants in engaging in eye level cross cultural social interaction in the way i described above (i.e. careful and considerate interaction trying to establish common ground in social conventions).
  • managing the event like a corporate event under a corporate code of conduct.

Now the SOTM CoC actually does neither of these. It does not provide any significant guidance on how to perform cross cultural social interaction and it also lacks the clarity of rules and the goal orientation of a corporate code of conduct. Instead it comes closer to a third type which i would call a political code of conduct.

The political code of conduct

The political code of conduct is the result of the idea of a corporate code of conduct being adapted by and for social reform and social justice movements and organizations. The idea here is – just like with a corporate code of conduct – to essentially replace existing social conventions and laws (because they are considered unjust) with a set of rules, in this case not designed to optimize productivity but to achieve certain political goals.

The political goals are not immediately obvious in the SOTM CoC since it has been toned down compared to the document it has been derived from.

Now i don’t want to judge the political ideas behind this, but no matter what you think of them it should be clear that the resulting rules will primarily have the goal of implementing those political ideas (just like the corporate CoC wants to increase productivity). Most political CoCs are created in a culturally fairly homogeneous environment (like an organization of people with a common social background and political goals). While there are occasionally translations of such documents into different languages, i have never seen a CoC that has been designed with multilingual input and discussion.

All of this is highly problematic in that it does not allow people to freely seek and find an individual common ground in social conventions between them but imposes a certain set of social conventions from the top. No matter what the political motives for creating such rules are, they always come from a specific cultural background and are imposed on the rest of a global and culturally diverse community in an act of cultural dominance.

What remains to be discussed is what a code of conduct could look like that is meant to help and support people in engaging in cross cultural social interaction at eye level in the traditional way, without imposing a culturally biased rule set.

A culturally neutral code of conduct

Here is an attempt at this. Since it is formulated in a certain language you can of course argue that it is not neutral anyway, but i put quite a lot of effort into not relying on a specific interpretation of language and the meaning of certain words and into basing it on general thoughts and ideas that just happen to be communicated here in a certain language.

Some might also think calling this a code of conduct is incorrect because it is so very different from most documents you see titled this way. I would use a quote from the CoC of the Chaos Communication Congress which pretty well describes the idea behind this draft as well:

This is not a CoC in the anglo-american sense of the word. It appeals to morality rather than trying to instill it.

The event you are participating in is visited by a large variety of people with very different cultural and social backgrounds as well as personal views, ideas and abilities. Experiencing this, getting to know such a large variety of very different people can be a very educative and enjoyable experience but also requires tolerance, curiosity and open-mindedness from the participants. If you are able and willing to bring this with you, you are very welcome to participate in the event. This document is meant to help you do that in a way that makes it a positive experience for all participants.

As guests and visitors of the event you are expected to conform with the local laws. You are also encouraged to familiarize yourself with the local customs and social conventions before and during the visit. This will help you during as well as outside the event.

When interacting with others at the event you need to expect and accept that other guests and visitors might have views, ideas and expectations very different to those you are familiar with. You are expected to be open-minded and tolerant towards such differences. We encourage you to reach out to, communicate and interact with others but when doing that you should be sensitive to them and to the possibility that your behaviour might make them uncomfortable.

We expect you to always treat others at the event with at least the same level of respect, tolerance and generosity as you expect and depend on others to extend to you. To accomplish this you should try to always put the goal of a friendly and open-minded interaction and the comfort of others with this interaction above your specific goals in it – like for example an argument or a discussion you are having. As a participant of the event you are required to be willing to adjust your behaviour in the interest of others and at the same time should as much as you can avoid requiring others to adjust their behaviour to you.

The above rules and suggestions should avoid misunderstandings and conflicts and help resolve smaller issues amicably in most situations. In case of more difficult conflicts when interacting with others you are encouraged to approach other participants of the event to mediate the conflict. If others approach you to help with conflicts try to mediate by attempting to help people find a common ground without actually engaging in the conflict yourself. If you are unable to do so or in cases of more serious conflicts you should approach the organizers of the event. Our aim in such a situation will be to help the parties and if necessary give specific instructions to them which you are required to follow. Such intervention will always try as much as possible to stay neutral and not take sides in the conflict.

If you are a proponent of corporate or political CoCs you most certainly will not like this because it follows a very different approach to the problem of cross cultural social interaction. In my opinion this approach is the only way to organize cross cultural interaction in a non-judgemental way that allows all people of a globally diverse community the opportunity to express themselves and have the chance for cross cultural exchange. You can still argue that you do not actually need such a document or you might want to shorten it further compared to the above.

The most common argument of proponents of political CoCs is that the rules are meant to protect the weak from the strong, the marginalized from the dominant, and are therefore justifiable. But that is in itself based on putting the specific social conventions which lead to the perception of weak and strong, of marginalized and dominant, above others and is therefore culturally biased. The language and the words used by the CoCs themselves to set the limits of acceptable behavior already imply the dominance of certain social conventions – which is why my draft above is mostly limited to suggestions meant to help people in their social interaction (i.e. being educative rather than normative) and to explaining fundamental ethical principles instead of imposing specific rules that require familiarity with the language and the underlying social conventions to follow them.

Another thing that should be kept in mind is that the local justice system obviously has a special role in all of this. That is not much different with no CoC or with a different kind of CoC, of course. So the question of where to hold an international meeting is one where the justice system of the place in question has quite an impact.

How about virtual places?

Now how about CoCs for digital communication channels and platforms? If you have a truly global international channel the same as above naturally applies. But most digital channels are at least language specific and in the case of OpenStreetMap often further specific to certain countries or regions. For those you can think about documenting certain common social conventions. But you need to keep in mind that you then can no longer claim that the channel or platform in question is open to or representative of the whole global OSM community.

TLDR: Engaging in eye level cross cultural social interaction in a careful and considerate way, guided by basic and universal moral principles and dominated by tolerance and respect for the other side and a willingness to accept differences in social conventions even if they are inconvenient – essentially the way countless generations before us have managed peaceful cross cultural contact for thousands of years – is not the best way to do this, it is simply the only way.

Design oriented generalization of open geodata

March 20, 2018
by chris
0 comments

FOSSGIS 2018 and talk announcement

Tomorrow once again the annual FOSSGIS conference starts – this year in Bonn. At the moment it looks like it is going to be pretty damn cold…

I am going to present a talk on Thursday afternoon about design oriented generalization of open geodata.

Here is a preview of two sample map renderings i am going to show:

Update: Video of the talk is available on media.ccc.de and on youtube.

Archangelsk in Winter

February 28, 2018
by chris
0 comments

Northern European Winter

Here are two winter impressions from Northern Europe – the first is from northern Russia, showing the city of Archangelsk:

You can see the frozen Northern Dvina River and the likewise mostly frozen White Sea. Also well visible in the low sun is the smoke from the power plants in the area.

Here is a magnified crop:

The second image is from northwestern Scotland:

This image not only features snow cover in the mountains but also shows a remarkable color combination, with the dark green of forested areas contrasting with the brown colors of the non-wooded parts of the hills and mountains.

Both images are based on Sentinel-2 data and can be found in the image catalog on services.imagico.de.

Saunders Island, South Sandwich Islands by Sentinel-2A

February 23, 2018
by chris
0 comments

Satellite image news

Some news on open data satellite images:

I updated the satellite image coverage visualizations. Here is the matching coverage volume plot over time:

Open data satellite image coverage development

There are several important things you can see in that:

  • With Landsat 8 the USGS has, for the second southern hemisphere summer season, adopted a changed acquisition pattern (indicated by a drop in image volume around December/January) in which Antarctic coverage is significantly reduced compared to previous years (see my yearly report for more details).
  • There have been significant fluctuations in the acquisition numbers of the Sentinel-2 satellites. Much of this is related to an inconsistent approach to the Antarctic here as well – with ESA sometimes acquiring Antarctic images with one of the satellites for a few weeks and then dropping it again. A consistent long term plan is not recognizable here.
  • In the last weeks Sentinel-2B acquisitions have been ramped up to full coverage in the nominal 10 day revisit interval (compared to the fairly arbitrary pattern with 10 days for Europe, Africa and Greenland, 20 days for the rest). See the sample of a 10 day coverage below. This is good news.
  • The problems with missing acquisitions and individual tiles are still the same as before, as indicated by the orange areas in the visualizations.

Full 10 day coverage by Sentinel-2B in early 2018

Another thing that changed is that ESA seems to have made a smaller adjustment to the Sentinel-2A acquisition pattern, now including the South Sandwich Islands. Here is an example of a rare, nearly cloud free view of Saunders Island:

Saunders Island, South Sandwich Islands by Sentinel-2A

Interestingly this is limited to Sentinel-2A – Sentinel-2B has so far not acquired any South Sandwich Islands images. As with the Antarctic, there does not seem to be a consistent plan behind this, which makes it very unreliable for the data user and kind of another wasted opportunity to establish Sentinel-2 as a reliable data source.

February 11, 2018
by chris
0 comments

On imitated problem solving

As many of you know, for a few years now there has been a new trend in remote sensing and cartography called Artificial Intelligence or Machine Learning. Like many similar hypes, what is communicated about this technology is rarely based on hard facts and is largely dominated by inflated marketing promises and wishful thinking. I want to provide here a bit of context which is often missing in discussions on the matter and which is important to understand when you consider the usefulness of such methods for cartographic purposes.

AI or Machine Learning technologies are nothing new – when i was at university these were already pretty well established in information science. The name has been misleading from the beginning though, since Intelligence and Learning imply an analogy to human intelligence and learning which does not really exist.

A good analogy to illustrate how these algorithms work is that of a young kid being mechanically trained: Imagine a young kid that has grown up with no exposure to a real world environment. This kid has learned basic human interaction and language but has no significant experience of the larger world and society beyond this.

Now you start watching TV with that kid and every time there is a dog on screen you call out "Oh, a dog" and encourage the kid to follow your example. And after some time you let the kid continue on its own as a human dog detector.

This is pretty much what AI or Machine Learning technologies do – except of course that the underlying technical systems are still usually much less suited for this task than the human brain. But that is just a difference of degree and could be overcome with time.
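To make the analogy concrete, here is a minimal sketch of such mechanical training in Python using scikit-learn. The file names and the patch format are hypothetical – the point is only that the system is handed pixel values and the desired call-outs and nothing else.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: small image patches flattened to pixel vectors
# plus the labels the "trainer" calls out (1 = building, 0 = not a building).
patches = np.load("training_patches.npy")   # shape: (n_samples, n_pixels * n_bands)
labels = np.load("training_labels.npy")     # shape: (n_samples,)

# "Watching TV with the kid": show examples together with the desired call-out.
model = RandomForestClassifier(n_estimators=200)
model.fit(patches, labels)

# "Letting the kid continue on its own": the model reproduces the call-outs for
# new patches purely from statistical similarity to the training examples,
# without any notion of what a building actually is.
new_patches = np.load("new_patches.npy")
predictions = model.predict(new_patches)
```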

The important thing to realize is that this is not how a human typically performs intellectual work.

To use an example closer to the domain of cartography – imagine the same scenario as above, but with the kid detecting buildings on satellite images. And now consider the same task being performed by a diligent and capable human, like the typical experienced OpenStreetMap mapper.

The trained kid has never seen a real world building from the outside. It has no mental image associated with the word building called out by its trainer except for what it sees on the satellite images.

Experienced OSM mappers however have an in depth knowledge of what a building is – both in the real world as well as in the abstract classification system of OpenStreetMap. If they see an empty swimming pool on an image they will be able to deduce from the shadows that this is not a building – even if they have never seen a swimming pool before. This typical qualified human interpretation of an image is based on an in depth understanding of what is visible in the image, connecting it to the huge base of real world experience a human typically has. This allows humans to solve specific problems they have never specifically been confronted with before, based on knowledge of universal principles like logic and the laws of physics.

As already indicated in the title of this post, AI or Machine Learning are in a way an imitation of problem solving in a cargo cult like fashion – like the kid in the example above, who has no understanding of what a dog or a building is beyond the training it receives and tries to imitate afterwards. This is also visible in the kind of funny errors you get from this kind of system – usually funny because they are stupid from the perspective of a human.

Those in decision making positions at companies like Facebook and Mapbox who try to push AI or Machine Learning into cartography (see here and here) are largely aware of these limitations. If they truly believed that AIs can replace human intelligence in mapping they would not try to push such methods into OSM, they would simply build their own geo-database using these methods, free of the inconvenient rules and constraints of OSM. The reason they push this into OSM is that on their own these methods are pretty useless for cartographic purposes. As illustrated above, for reasons of principle they produce pretty blatant and stupid errors, and even if the error rate is low that usually ruins things for most applications. What would you think of a map where one percent of the buildings are in the middle of a road or river or similar? Would you trust a self driving car that uses a road database where 0.1 percent of the roads lead into a lake or wall?

What Facebook & Co. hope for is that by pushing AI methods into OSM they can get the OSM community to clean up the errors their trained mechanical kids inevitably produce and thereby turn the practically pretty useless AI results into something of practical value – or, to put it more bluntly, to change OSM from being a map by the people for the people into a project of crowd sourced slave work for the corporate AI overlords.

If you follow my blog you know i am not at all opposed to automated data processing in cartography. I usually prefer analytical methods to AI based algorithms though, because they produce better results for the problems i am dealing with. But one of the principles i try to follow strictly in that regard is never to base a process on manually post processing machine generated data. The big advantage of using fully automated methods is that you can scale them very well. But you immediately lose this advantage if you start introducing manual post processing because this does not scale in the same way. If you ignore this because crowd sourced work from the OSM community comes for free, that indicates a pretty problematic and arrogant attitude towards this community. Computers should perform work for humans, not the other way round.

If you are into AI/machine learning and want OSM to profit from it there are a number of ways you can work towards this in a constructive way:

  • make your methods available as open source to the OSM community to use as they see fit.
  • share your experience using these methods by writing documentation and instructions.
  • make data like satellite imagery available under a license and in a form that is well suited for automated analysis. This in particular means:
    • without lossy compression artefacts
    • with proper radiometric calibration
    • with all spectral bands available
    • with complete metadata
  • develop methods that support mappers in solving practically relevant problems in their work rather than looking for ways to get mappers to fix the shortcomings of the results of your algorithms.

In other words: You should do exactly the opposite of what Facebook and Mapbox are doing in this field.

I want to close this post with a short remark regarding the question of whether we will in the future get machines that can perform intelligent work significantly beyond the level of a trained kid. The answer is: We already have that, in the form of computer programs written to solve specific tasks. The superficial attractiveness of AI or Machine Learning comes from the promise that it can help you solve problems you might not understand well enough to be able to specifically program a computer to solve them. I don’t consider this something that is likely to happen in the foreseeable future because that would not just mean reproducing the complex life long learning process of an individual human being but also the millennia of cultural and technological evolution of human society as a whole.

What is well possible though is that for everyday tasks we will in the future increasingly rely on this kind of imitated problem solving through AIs and this way lose the ability to analyze and solve these problems ourselves based on a deeper understanding in the way described above. If that happens we would obviously also lose the ability to recognize the difference between a superficial imitated solution and a real in depth solution of the problem. In the end a building will then simply be defined as that which the AI recognizes as a building.

Western Alps autumn colors 2017

January 27, 2018
by chris
4 Comments

Mapping imagery additions

Over the last few days i added a number of images to the OSM images for mapping, produced from Sentinel-2 and Landsat data.

There are three new images for the Antarctic:

McMurdo Bay area

This image covers McMurdo Sound, the McMurdo Dry Valleys and Ross Island. The data is from February 2017 – end of summer, but with quite a bit of seasonal sea ice cover still present.

There is a lot that can be mapped from this image in terms of topography, glaciers and other things. It can also be used to properly locate features where you only have an approximate position from other data sources. If you compare the image with existing data in OSM you will also see that there is a significant mismatch in many cases. Positional accuracy of the image – like that of the other Antarctic images – is good but not great. In mountainous areas at the edge of the image swath (here: in the northwest) errors can probably exceed 50 m on occasion but will otherwise usually be less.

Bunger Hills

Another part of the East Antarctic coast. This requires a bit of experience to distinguish between permanent and non-permanent ice. But existing mapping in the area is poor so there is a lot of room for improvement.

Larsen C ice shelf edge

This is an image for updating the ice shelf edge after the iceberg calving in 2017. The current mapping in OSM here is very crude since it is based on low resolution images.

Western Alps autumn colors

And then there is another image which is more of an experiment. This is an autumn image from the western Alps that shows autumn colors and could be helpful for leaf_cycle mapping. I am not quite sure how well this works. You probably need some level of local knowledge to be able to interpret the colors correctly. The red-brown colored forested areas are usually deciduous broadleaved forest, in many cases beeches. Larches are more yellow in color and are often mixed with other types of trees, which makes them more difficult to identify. Also the different types of trees change their colors at different times – depending on the altitude as well – so a single image does not really cover everything and solid local knowledge is probably important to avoid misinterpreting the colors.

I would be interested in feedback on how useful this image is for mapping leaf_cycle.


January 21, 2018
by chris
0 comments

On permanence in IT and cartography

Many of my readers have probably heard about the company Mapzen closing down. In that context the Mapzen CEO Randy Meech published (or more precisely: re-published) a piece on volatility and permanence in the tech business which reminded me of a subject i had intended to write about here for some time.

When i started publishing 3d geovisualizations more than ten years ago these were unique both technically and design wise. By my standards today these early works were severely limited in various ways – both due to my lack of knowledge and experience on the matter and due to the limits in quality of available data and the severe limitations of computer hardware at that time. But at the same time they were in many ways miles ahead of what everyone else was producing in this field (and in some ways still are).

An early 2006 3d Earth visualization from me

Today, more than ten years after these early works, a lot has changed in both the quality of the results and in the underlying technology. But there are also elements that have stayed almost the same, in particular the use of POV-Ray as the rendering engine.

A more recent view produced in 2015

Randy in his text reflected on the oldest companies of the world, and if you were to assemble a list of the oldest end user computer programs still in use, POV-Ray would be pretty far up, with its roots going back to 1987. Not as old as TeX, but still quite remarkable.

What makes programs like TeX or POV-Ray prevail in a world where in both cases there has been – in parallel or subsequently – a multi-billion dollar industry established in a very different direction but in a way competing for the same tasks (typesetting text and producing 3d renderings respectively)?

The answer is that they are based on ideas that are timeless and radical in some way and that they were nonetheless specifically developed for production use.

In the case of POV-Ray the timeless, radical idea was backwards raytracing in its pure form. There were dozens of projects following that idea, mostly in the 1990s in the field of computer science research, but none of them was actually seriously developed for production use. There were also dozens of both open source and proprietary rendering engines developed for production use making use of backwards rendering techniques, but all of them diluted the pure backwards rendering idea because of the attractiveness of scanline rendering centered, hardware accelerated 3d, which at that time dominated the commercially important gaming and movie industries.

Because POV-Ray was the only pure backwards renderer it was also the only renderer that could do direct rendering of implicit surfaces. Ryoichi Suzuki, who implemented this, by the way indicated back in 2001 that it was based on an idea originally implemented 15 years earlier, which makes it over 30 years old now. The POV-Ray isosurface implementation is the basis of all my 3d Earth visualizations.
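For readers not familiar with the concept: direct rendering of an implicit surface means intersecting each viewing ray with the zero set of a function f(x, y, z) by numerical root finding, without ever converting the surface into polygons. The following Python fragment is only a bare-bones illustration of that principle (a sphere with a procedural bump standing in for a planet with terrain), not POV-Ray's actual root solver.

```python
import numpy as np

def f(p):
    """Implicit function - its zero level set is a slightly bumpy sphere."""
    bump = 0.02 * np.sin(8 * p[0]) * np.sin(8 * p[1]) * np.sin(8 * p[2])
    return np.linalg.norm(p) - (1.0 + bump)

def intersect(origin, direction, t_max=5.0, step=0.01, refine=40):
    """March along the ray until f changes sign, then bisect for the root."""
    direction = direction / np.linalg.norm(direction)
    t_prev, f_prev = 0.0, f(origin)
    t = step
    while t < t_max:
        f_cur = f(origin + t * direction)
        if f_prev * f_cur < 0.0:            # sign change: the surface was crossed
            lo, hi = t_prev, t
            for _ in range(refine):         # bisection refinement of the hit point
                mid = 0.5 * (lo + hi)
                if f_prev * f(origin + mid * direction) < 0.0:
                    hi = mid
                else:
                    lo = mid
            return origin + 0.5 * (lo + hi) * direction
        t_prev, f_prev = t, f_cur
        t += step
    return None                              # the ray misses the surface

# one backwards ray from a camera at (0, 0, 3) through the image center:
hit = intersect(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0]))
```

In POV-Ray the same principle is exposed through the isosurface object, where the function is specified in the scene description and the renderer performs the root finding along each ray.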

In the grand scheme of overall cultural and technological development ten or 30 years are of course nothing. Eventually POV-Ray and my 3d map design work are almost certainly destined for oblivion. And maybe the underlying timeless, radical ideas are also not as timeless as i indicated. But what you can say with certainty is that short term commercial success is no indicator of the long term viability and significance of an idea for the advancement of society.

Going more specifically into cartography and map design technology – which most of my readers are probably more familiar with – companies like Mapbox/Google/Here/Esri etc. are focused on short term solutions for their immediate business needs – just like most businesses looking into 3d rendering in the 1990s found in scanline rendering techniques and their implementation in specialized hardware a convenient and profitable way to do the low quality 3d we all know from that era’s computer games and movies.

Hardly anyone, at least no one in a position of power, at a company like Google or Mapbox has the long term vision of a Donald Knuth or an Eduard Imhof. This is not only because they cannot attract such people to work for them but primarily because that would be extremely dangerous for the short term business success.

Mapzen always presented itself as if it were less oriented towards short term business goals than other companies, and maybe it was and this contributed to its demise. But at the same time they did not have the timeless and radical ideas and the energy and vision to pursue them to create something like TeX or POV-Ray that could define them and give them a long term advantage over the big players like Google or Mapbox. What they produced were overwhelmingly products following the same short term trends as the other players do, in a lemming-like fashion. Not without specific innovative ideas for sure, but nothing radical that would actually make them stand out.

Mapzen published a lot of their work as open source software and this way tried to make sure it lives on after the company closes. This is no guarantee however. There are tons of open source programs dozing away in the expanses of the net that no one looks at or uses any more.

While open sourcing development work is commendable and important for innovation and progress – TeX and POV-Ray as individual programs would never have lasted this long if they had not been open source – it is important to note that the deciding factor ultimately is whether there is actually

  • a substantially innovative idea being put forward,
  • this idea being consistently developed to its real potential,
  • this idea being implemented and demonstrated in practical use,
  • the idea being shared and communicated publicly and
  • the idea bringing substantial cultural or technological advancement over pre-existing and near future alternatives – which unfortunately can, if at all, usually only be determined in retrospect.
waterbody and ford rendering in the alternative-colors style

December 10, 2017
by chris
0 comments

Water under the bridge

When i recently wrote about the rendering of footways/cycleways in OpenStreetMap based maps i indicated that there are other changes i made in the alternative-colors style that deserve some more detailed explanation. Here i am going to introduce some of them, related to waterbody rendering.

Waterbodies in the standard style (and similarly in nearly all other OSM based maps) have always been rendered in a relatively simple, not to say crude way. Every water related feature is drawn in the same color, water areas traditionally starting at z6, river lines at z8 and streams and smaller artificial waterways at z13. The z8 and z13 thresholds are so firmly established that mappers often decide how to tag waterways specifically to accommodate these thresholds. Since the smaller artificial waterways (ditch and drain) are rendered slightly thinner than streams, these tags are frequently abused to map smaller natural waterways. The only significant styling specialty in this traditional framework is that the small waterways starting at z13 get a bright casing so they are better visible on dark backgrounds.

Some time ago a change was introduced to render intermittent waterways with a dashed line. While this seems like a logical styling decision it turned out to work rather badly because of the problems of dashed line styles in combination with detailed geometries, as i already explained in the context of the footway rendering.

This is the situation that forms the basis of the changes i am going to write about here.

Differentiating waterbody types

As indicated above traditionally the OSM standard style renders all water features in the same color. This color was changed some time ago but it is still one single color that is used for everything – from the ocean to the smallest streams and ditches.

This all-one-color scheme does not require mappers to think about how they map waterbodies specifically – they can just paint the map blue, so to speak. In particular with water area tagging this has led to a lot of arbitrariness and relatively low data quality in the more detailed, more specific information. As i pointed out in the context of waterbody data use, the data cannot really be used for much else than painting waterbodies in a uniform color. At the same time this makes life very easy for the designers of these relatively simple maps since you don’t have to worry about drawing order or other difficulties.

More specific information about waterbodies would however be very useful for data users, so it makes sense to render it to encourage mappers to be more diligent in recording such information. And differentiating the different types of waterbodies can help a lot in creating a more readable map, since what color and styling works best varies depending on the type of waterbody. And since blue is widely reserved for water related features anyway, differentiating by color is well possible.

The basic three types of waterbodies i am differentiating are:

  • the ocean
  • standing inland waterbodies (primarily lakes)
  • flowing water (both line and polygon features)

Water colors for the ocean (left), standing inland water (middle) and flowing water (right)

This coloring scheme is also visible in the low zoom demo i showed recently.

Rivers use the strongest and darkest color so they are well visible even on strong and structured backgrounds, while the ocean uses a brighter color so as not to dominate the land colors too much, given that it covers a large area.

visibility of darker river color on dark background

differentiating standing and flowing water at the Rhine

In addition to differentiating by the physical type of waterbody, for line features i also distinguish between natural and artificial waterways in a relatively subtle form, using a slightly brighter blue centerline at the higher zoom levels.

canal rendering with subtly brighter centerline

drain and stream rendering at z18 in comparison

Use of subtlety is of fundamental importance if you want to create a rich map that is still well readable. This distinction between natural and artificial waterways is strong enough to be clearly recognized by the keen observer but at the same time it does not add a lot of noise that would otherwise affect the readability of the map.

Intermittency of waterbodies

current rendering in the standard style of intermittent rivers at z10

As indicated above the standard style already differentiates intermittent waterways, but not in a very good way. I tested various options and ultimately came up with the following approach:

  • intermittent waterways start one zoom level later and are slightly thinner than perennial ones at the first zoom levels.
  • at z12-z13 intermittent rivers get a bright color centerline. This is fairly well visible and works much better with detailed geometries than dashing. At z14 and above i use dashing for rivers, but with very small gaps between the dashes so the line is still well visible as a continuous geometry. Streams, ditches and drains are rendered with a similar dashing from z13 upwards.
  • intermittent standing water areas get a blue grain pattern with a transparent base so underlying landcover rendering is visible.
  • intermittent flowing water areas get a bright grain pattern on a blue base starting at z14. This ensures the geometry outline is still well visible which is fairly important for readability in case of riverbanks.

intermittent waterway rendering at z13 with bright centerline for rivers and dashing for streams

intermittent riverbank polygons at z15 in combination with intermittent streams and rivers

intermittent lakes at z10

In addition, for waterbodies with salt water (salt=yes) the ocean color is used in combination with a weak bright grain pattern. Here is an abstract demo of all of these together:

intermittent water rendering in the alternative-colors style at z14 – click to see the z15 version

Other changes

In addition to the more fundamental changes described above i also did a lot of tuning of the line widths and other rendering parameters for a more balanced relationship between the different feature types and a more continuous change in appearance when zooming in or out.

Fords

Not directly connected to the waterbody changes but still somewhat related – i added rendering of fords. These are shown in the standard style as POIs with an icon starting at z16 which is a fairly unfortunate way of rendering them because:

  • the icon covers the most interesting and most important area of the actual crossing.
  • the icon is rendered for anything that is tagged ford=yes – this can be a big highway or a small footway – or anything else for that matter where the ford tag does not make any sense.
  • z16 is way too late to be of help to the map user in many cases.

POI rendering of fords – a lot of visual noise carrying very little useful information

In other words: This kind of rendering in many situations does not really improve the map.

I used a different approach by rendering fords similarly to bridges – after all, a ford is a highway crossing a waterway without a bridge. The difficulty is that fords can be tagged on a node while bridges are by convention always mapped as ways. Rendering node based fords similarly to bridges requires quite a bit of effort and i am afraid this adds significantly to the already complex road code. But i think the visual results make it worth it.

fords mapped as nodes for footways, tracks and minor roads

As you can see this is usually intuitively recognizable as a ford and the crossing geometry is not obscured by a big and distracting icon.

ford rendering at z15 for various highway types – click for z16 version

December 1, 2017
by chris
0 comments

OSMF board elections

Tomorrow the OpenStreetMap Foundation is going to open the board elections for this year’s Annual General Meeting, for two seats on the OSMF board. If you are a member of the OSMF i would strongly urge you to vote. If not, you might want to consider becoming a member (which however will not allow you to participate in these elections – for that you would have to have been a member a month before the elections).

The reason this is of particular importance this year is that this year’s candidates for the positions on the board offer in part fairly contrasting positions on the direction of the OSMF and the OpenStreetMap project in general. You can get an idea of the ideas and views of the candidates in the Q&A on the OSM wiki, but you also need to read between the lines because candidates have partly picked up the bad habit from big politics of talking much without saying anything of substance. Sometimes the way the candidates deal with questions they do not like is more revealing than the actual answers.

Of course replacing two of seven board members will not immediately change the whole OSMF, but due to the quite contrasting views and backgrounds of the candidates it will send a significant message in terms of what direction the members support and will this way probably also weigh significantly on the other board members.

Of course even a fundamental change in direction of the OSMF would not necessarily have much influence on the OpenStreetMap project as a whole. One of the most remarkable aspects of OpenStreetMap is how little it depends on central organization and management. But of course if the OSMF and the OSM community start diverging significantly in goals and direction this could create a lot of friction.