Sentinel-2 2017 coverage

November 1, 2017
by chris

Satellite image acquisitions – yearly report

About a year ago i wrote my report on the first year of Sentinel-2 acquisitions as well as on Landsat acquisitions for a matching time frame. This was – and to my knowledge still is – the most detailed and accurate analysis of image data available from these satellites. Here is an update of this for the time frame from October 2016 to October 2017.

The October division is meant to include exactly one summer season of both the northern and the southern hemisphere. A calendar year based division would always split the southern hemisphere summer season.
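As a sketch, this division can be written as a simple date-bucketing rule; the `report_year` helper here is purely illustrative:

```python
from datetime import date

def report_year(d: date) -> int:
    """Assign a date to an October-to-October reporting year.

    Dates from October onward count toward the following year's
    report, so each report contains exactly one northern and one
    southern hemisphere summer season."""
    return d.year + 1 if d.month >= 10 else d.year

# A calendar-year division would split the southern summer season:
# December and January fall into different calendar years, but into
# the same October-based reporting year.
assert report_year(date(2016, 12, 15)) == report_year(date(2017, 1, 15)) == 2017
```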

Here is the plot for the overall recording volume of all satellites:


Both Landsat satellites have operated during the last year without any notable incidents or interruptions of recordings. Landsat 7 had its last orbit maintenance maneuver in early 2017 and is now in a steadily declining orbit, which means the recording time frame will move from the current about 10:15 to earlier times, as happened previously for EO-1.

Here are the coverage maps for Landsat 8 day time acquisitions:

The most notable difference from previous years is that Antarctic coverage was significantly reduced during the 2016-2017 summer (see last year's maps for comparison). You can see this in the line plot on top as a dip in the Landsat 8 line near the end of 2016, which differs significantly from the patterns of the previous years. To my knowledge there has so far not been a statement from the USGS as to why this change was made.

Otherwise not much has changed – we now get routine off-nadir acquisitions for northern Greenland and the Antarctic interior. In Greenland these always happen for the same path which means there is room for improvement by selecting the path dynamically based on weather in the target area. All 2017 northern Greenland off-nadir images are severely affected by clouds.

Also we still have two gaps in land area coverage at lower latitudes – Rockall and Iony Island. (Edit: i noticed there is actually one image for Rockall – though not regular coverage. Iony Island is the more meaningful omission.)


For Sentinel-2A we are looking at the second year of operations and this might lead to expectations of an increased level of routine and therefore reliability. We also get the first images from Sentinel-2B. Here are the numbers for Sentinel-2A and Sentinel-2B separately:

And here are the combined numbers with a different color scale.

I should emphasize that these are the images publicly available. As pointed out in a previous report, there are significant differences between the published acquisition plans and the actual recordings, and furthermore publication of images is frequently incomplete. Here is an example from Sentinel-2B from my detailed statistics page (which i also updated to the current state).

I have not determined precise numbers but it is clear that the volume of both images planned but not recorded and images recorded but not published is significant. Especially the latter – in particular given the arbitrariness shown in the image above – seems quite embarrassing.

The acquisition patterns are nearly the same as last year and apparently also the same for Sentinel-2A and Sentinel-2B. To summarize: most of Europe and Africa as well as Greenland are recorded at every opportunity – which means a ten day interval for each satellite. The rest of the larger land masses except Antarctica is covered only at every second opportunity, except for some seemingly arbitrary small special interest areas where a ten day interval is recorded as well. Smaller islands are missing entirely. Antarctica has been covered during the 2016-2017 summer but mostly at a much lower frequency than the rest of Earth.

Apart from the spatial distribution of acquisitions (which quite clearly is a conscious political choice) the most striking difference from Landsat is that high latitude acquisitions in Greenland and the European Arctic islands are not reduced to account for the naturally larger overlap between recording opportunities. In northern Greenland this leads to frequently more than one image per day during summer. While this can be nice for data users interested in those areas, and also somewhat compensates for the otherwise low focus on these regions, it is fairly wasteful in terms of recording resources. It probably results from blindly sticking to the rule record Europe and Greenland at every opportunity, decided on by bureaucrats who have no clue what this actually means in practice.
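The size of this overlap effect can be estimated with simple geometry. The sketch below uses approximate Sentinel-2 figures (a ~290 km swath and 143 orbits per 10-day repeat cycle) which are my assumptions, not numbers from the text:

```python
import math

EARTH_CIRCUMFERENCE_KM = 40075.0
ORBITS_PER_CYCLE = 143      # Sentinel-2: 143 orbits in one 10-day cycle
SWATH_KM = 290.0            # approximate Sentinel-2 swath width

def opportunities_per_cycle(lat_deg: float) -> float:
    """Rough number of times a point at the given latitude falls
    inside the swath during one repeat cycle. Ground track spacing
    shrinks with cos(latitude) while the swath width stays constant."""
    spacing_km = (EARTH_CIRCUMFERENCE_KM / ORBITS_PER_CYCLE) \
        * math.cos(math.radians(lat_deg))
    return max(1.0, SWATH_KM / spacing_km)

# roughly one opportunity per cycle at the equator, but about six
# at 80 degrees north - hence more than one image per day in
# northern Greenland when everything is recorded
assert opportunities_per_cycle(0.0) < 1.1
assert opportunities_per_cycle(80.0) > 5.0
```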


So overall not that much has changed since last year – which i guess is good news for Landsat and less good news for Sentinel-2 since the latter is still subject to the same problems and limitations as last year. But maybe we just need a few more years to get used to these problems…

Apart from the problems already mentioned, Sentinel-2 operations continue to be plagued by delays in data processing and other incidents. While for Landsat you can fairly reliably predict when the next image will be recorded for a certain place on earth and that it will be available a few hours afterwards, for Sentinel-2 this is still much less the case.

Despite all this criticism of Sentinel-2 it should be mentioned that with two satellites now operating at a more or less constant level, Sentinel-2 now usually offers a higher recording frequency than Landsat 8 (a practically sensible comparison since use of data from Landsat 7 is often fairly difficult due to the SLC gaps) – even in the lower priority areas – except for the small islands and Antarctica of course. In other words: if you look for the most recent image of a certain point on Earth, it is more likely you will find it in the Sentinel-2 archive than in the Landsat 8 archive – despite the fact that delays in processing, missing recordings and missing publications put Sentinel-2 at a significant disadvantage.
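Put in numbers, the comparison comes down to simple arithmetic on the repeat cycles (a back-of-the-envelope sketch; intervals in days, satellites assumed evenly phased):

```python
S2_CYCLE = 10    # one Sentinel-2 satellite, recording at every opportunity
LS8_CYCLE = 16   # Landsat 8 repeat cycle

def combined_revisit(per_satellite_interval: float, n_satellites: int) -> float:
    """Average revisit interval with several evenly phased satellites."""
    return per_satellite_interval / n_satellites

# high priority areas: every opportunity with both satellites
assert combined_revisit(S2_CYCLE, 2) == 5
# lower priority areas: every second opportunity with both satellites -
# still shorter than the 16-day Landsat 8 cycle
assert combined_revisit(2 * S2_CYCLE, 2) < LS8_CYCLE
```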

And another positive thing about Sentinel-2 – availability of the download infrastructure has improved a lot in the past months. Longer unscheduled downtimes where no downloads are possible at all are now fairly rare.

Here for reference all the recording visualizations for this and the previous years:

year   day        night   day pixel coverage
2014   LS8, LS7   LS8     LS8
2015   LS8, LS7   LS8     LS8
2016   LS8, LS7   LS8     LS8, S2A
2017   LS8, LS7   LS8     LS8, S2A, S2B, S2 (both)

And also see the detailed recording patterns per orbital period and the daily recording numbers.


October 29, 2017
by chris

Islands in Spring and Autumn

A few satellite image impressions from the last weeks showing islands in spring and autumn. First a view of southwest Iceland from just a few days ago:

Then a clear weather glimpse of South Georgia in spring – with a large iceberg to the northeast:

And finally an image of Onekotan Island in the northern Kuril Islands:

The first two are based on Copernicus Sentinel-2 data, the last is created from Landsat imagery.


October 26, 2017
by chris

Drawing the lines

Since i finished what i had originally planned for the last OSM Hack Weekend in Karlsruhe before the weekend itself, what i actually worked on there was something different – though not unrelated.

Rendering lines in a map seems at first glance the simplest thing to do, but in reality quite a number of things need to be considered for lines in a map to be well readable. In particular, a dashed or dotted line is much more difficult to get right than a solid line.

The OSM standard style uses dashing to differentiate tracks by tracktype and footways/cycleways by surface. This works reasonably well at the high zoom levels but it degrades to the point of being completely unreadable as you zoom out in areas with a dense network of paths, like in these examples:

Now you can try to vary the styling, for example by adding bright halos, increasing contrast or varying the line width, but ultimately a dashed or dotted line always makes it more difficult to identify the paths as continuous lines in areas with a lot of detail. A fundamentally different and possibly better approach would be to only draw the most important ways at these scales. But for that you'd need an assessment of importance, which is not really something you can readily find in the data, which ultimately is quite subjective and which likely would not be very intuitive in many situations. Some map users for example might find it helpful if only those paths are shown that are part of a long distance trail. A local map user might on the other hand consider a different path more important because it is the shortest, easiest and most frequently used connection between two villages in the area.

One solution for tracks and paths at z13/14 that i had already quickly tested some time ago is to drop the dashing and use continuous lines at these scales. This severely limits the possibilities to distinguish between different classes of paths – you can essentially only use the line width and color to differentiate, and at narrow line widths it becomes more and more difficult to distinguish different colors because all pixels contain a mixture of background and line color.

One thing that prevented implementing this approach was the fact that cycleways in the standard style are traditionally rendered in blue color and a solid blue line looks just too much like a water feature intuitively. The use of blue color for cycleways has always been a sore spot but attempts to change that in the past were always hampered by the lack of other options. In particular the use of purple for boundaries creates severe limitations. Since i got rid of the purple boundaries i have some more freedom in that matter now.

Finding the right balance in colors, line widths and – at the higher zoom levels – the dashing patterns is difficult but i think the results are quite agreeable. This modification puts a stronger emphasis on footways and cycleways in the map but that in my eyes is mainly compensation for the under-representation they have in the standard style at the moment.

At z13 all lines are solid, the tracks vary in width slightly to indicate the tracktype but this variation is not large enough to reliably identify the individual track types although you can usually distinguish grade1 from grade4. Footways and cycleways are the same color (red) which can be distinguished from the track brown in nearly all situations.

(same areas in the standard style: here, here and here)

Overall the map image is much clearer and less noisy. You can better identify individual tracks and paths and their routes and connections, in particular in densely mapped areas, although you lose the ability to differentiate between different types in less densely mapped areas.

At z14 the styling is very similar, the line width variation for tracks is somewhat stronger and i start using dashing for tracks without tracktype, indicating to the mapper that important information is missing there.

(same areas in the standard style: here, here and here)

At z15 a white casing is added like it is also done in the standard style. Tracks are the same as in the standard style but cycleways are purple now and both cycleways and footways are stronger and differentiate clearly by surface type with long dashing for paved, short dashing for unpaved and alternating long/short for unspecified surface.

(same areas in the standard style: here and here)

I also considered differentiating out a third class of paths. The standard style removed that some time ago, which leads to the somewhat peculiar situation that highway=path + foot=designated + bicycle=designated is shown in cycleway color while highway=path without foot or bicycle tags is shown in footway color. But unfortunately mapping is often very inconsistent in this matter so this would not necessarily improve usability that much. The meaning of the colors essentially is:

  • purple: usable by bike, usually also on foot
  • red: usable on foot, maybe also by bike

At higher zoom levels the line width is slowly increased just like for tracks and the dashing is also slightly enlarged for better readability.
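To make the interplay of these parameters concrete, here is a toy sketch of zoom-dependent path styling. The concrete widths and dash lengths are invented for illustration and are not the actual style's values; only the qualitative behavior (solid lines below z15, surface-dependent dashing from z15 up, slowly growing widths and dashes) follows the description above:

```python
def path_style(zoom: int, surface: str = None) -> dict:
    """Illustrative zoom-dependent styling for footways/cycleways."""
    width = 0.8 + 0.25 * max(0, zoom - 13)       # slowly increasing width
    if zoom < 15:
        return {'width': width, 'dasharray': None}   # solid line
    # long dashes for paved, short for unpaved, alternating long/short
    # for unspecified surface; dashing slightly enlarged per zoom level
    dashes = {'paved': [6, 3], 'unpaved': [2, 3]}.get(surface, [6, 3, 2, 3])
    scale = 1.0 + 0.1 * (zoom - 15)
    return {'width': width, 'dasharray': [d * scale for d in dashes]}

assert path_style(13)['dasharray'] is None           # solid at z13/z14
assert path_style(15, 'unpaved')['dasharray'] == [2.0, 3.0]
```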

The style modifications for this can be found here.

I hope this description gives a tiny bit of insight into how map style design works when you systematically analyze and address problems. The actual coding is not that much work. The hard work lies on the one hand in analyzing the map rendering and identifying the problems, and on the other hand in adjusting and testing the various parameters, observing how the results affect the map viewing experience and how the different colors interact with each other in different geographic settings at different latitudes and resulting scales.

In case you wonder what you can do as a mapper to allow for better readable rendering of tracks/footways/cycleways:

  • tag tracktype and surface where you know it.
  • tag access restrictions, in particular foot=* and bicycle=* as they apply.
  • although not currently rendered further information, in particular width=*, smoothness=* and sac_scale=* could be used to better differentiate rendering.

Tracks, footways and cycleways are not the only place where the standard style uses dashing and also not the only place where this leads to problems. Other problematic cases are administrative boundaries and intermittent waterways. There are already some improvements in these areas as well in the alternative-colors style. Maybe i will write about this in a future post.


October 19, 2017
by chris

New Zealand mosaic and 3d views

I here introduce a new satellite image mosaic i produced of New Zealand.

This is based on Sentinel-2 images from 2015 to 2017 and otherwise shares many of the characteristics of my previous mosaics like the high level of cloud freeness, seamless ocean depiction and assembly with priority to snow minimum and vegetation maximum.

What’s new is a significant improvement to the atmosphere correction methodology, which i used here for the first time on a larger project. This results in a more uniform and more balanced color rendering overall. It is also the first time i produced the matching vegetation map at the Sentinel-2 resolution of 10m.

Here are a few sample crops, more can be found on the mosaic description page on

I also produced a few new 3d views based on this mosaic, here two examples:

More 3d views can be found in the catalog on


October 12, 2017
by chris

You name it – on representing geographic diversity in names

There has recently been some discussion in OpenStreetMap on names and labeling due to some people expressing the desire to abandon the geographically neutral labeling on the OpenStreetMap standard style. One of the things this discussion once again showed is a basic problem in the way names are recorded in OpenStreetMap which I here want to briefly discuss.

The OpenStreetMap naming system is based on the idea that features in the database can have a local name, the name predominantly used locally for the feature, as well as an arbitrary number of names in different languages, that is how non-locals or locals speaking a different language than most name it. The first is to be mapped in the name tag, the latter ones go into name:<language> tags where <language> usually is the two letter code of the language of the name. There are other name tags like alt_name (for an alternative local name) or old_name (for a historic name no longer in active use).

The OpenStreetMap standard style renders the content of the name tag and this way is supposed to display the name locally used. This is one of the most characteristic aspects of the map and a highly visible demonstration of OpenStreetMap being based on local knowledge and valuing geographic and cultural diversity. There are of course people who think it is more important to have yet another map (in addition to hundreds of commercial OSM based maps) where they can read the labels than to have at least a single map that can be read by every local mapper all over the world in their local area – but this is not my topic here.

The problem with basing labels on a single name tag for the local name is that local mappers are then often in conflict between tagging the actual local name and tagging whatever they want to see on the map – which might be affected by the desire for uniformity in labeling or the wish to make the map more readable for non-locals. As a result the name tag often contains compound strings with names in multiple languages, in particular in regions where multiple languages are widely used by locals and there might not even be a single dominant local name.

Labels in several languages in Morocco

Labels in several languages in Korea

The solution to this problem would lie in dropping the illusion that there is always a single local name that can be verifiably mapped. Instead you would tag the names in the different languages as it is done currently and add a format string indicating what the common form of displaying the names of this feature locally is. Separating the multilingual name data from the information on local name use is the key here.

The format string would normally not have to be specified for every feature individually since typically all features in an area would use the same format string. Instead you would have the individual features inherit the format strings of the administrative units they are located in.

For example in case of Germany the admin_level 2 boundary relation (51477) would get something like language_format=$de – and there would be no need for further format strings locally except maybe for a few smaller areas with a local language or individual features with only a foreign language name. Switzerland (51701) would get language_format=$de/$fr/$it/$rm and the different Cantons would get different format strings depending on the locally used languages.

The key and syntax for the format string are just an example of course to illustrate the idea – those could be different.
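As a sketch of the data user side, here is a minimal resolver for such a format string against the name:<language> tags, with a fallback to the legacy name tag; the `$xx` syntax follows the illustrative examples above and the `resolve_label` helper is hypothetical:

```python
import re

def resolve_label(tags: dict, format_string: str = None) -> str:
    """Resolve a language_format string like '$de/$fr' against the
    name:<language> tags of a feature.

    Falls back to the legacy name tag if there is no format string
    or any referenced language is not tagged on the feature."""
    if format_string:
        languages = re.findall(r'\$([a-z_]+)', format_string)
        if languages and all('name:' + l in tags for l in languages):
            return re.sub(r'\$([a-z_]+)',
                          lambda m: tags['name:' + m.group(1)],
                          format_string)
    return tags.get('name')   # legacy fallback

# a bilingual feature rendered in the locally common compound form
tags = {'name': 'Biel/Bienne', 'name:de': 'Biel', 'name:fr': 'Bienne'}
assert resolve_label(tags, '$de/$fr') == 'Biel/Bienne'
# missing name:fr tag -> fall back to the legacy name tag
assert resolve_label({'name': 'Biel'}, '$de/$fr') == 'Biel'
```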

I think the advantages of this concept are obvious:

  • The rules for the individual language name tags are much clearer and better defined, so there is less room for arbitrariness, resulting in more reliable data for the data user than the name tag provides.
  • Any desire of the local mappers to get certain labels in the map would be articulated in the format strings and would not taint the actual name data.
  • The format string allows data users a lot more flexibility – it can be ignored, modified or replaced by a custom and globally constant format string or a more complex interpreting function with fallbacks, transliterations etc. Or data users can select if they want to use format strings on a per feature basis or only as inherited from the admin units.
  • The problem that different script variants are needed for the same Unicode characters in different languages (a.k.a. the Han unification problem) would be solved as well.
  • Using the individual language names as data source for labels instead of the separately tagged name tag allows for quality control of this data through the map – likely resulting in fewer errors and inconsistencies in the name data overall.
  • There would be an easy fallback during transition to this tagging system – if there is no valid format string or any of the languages in the format string is not tagged you could fall back to the legacy name tag.

But i will also mention the main disadvantages of this idea:

  • The data users do not get a hand drawn label string prepared by the mapper and ready to use but have to interpret more structured information in the form of individual names and format strings.
  • Allowing features to inherit the format string of administrative units will require spatial relationship tests which are too expensive to be done on the fly so this would need support from the OSM data converters, in particular those that are used for map rendering (like osm2pgsql, Imposm). This is not trivial, especially if you want to take into account that changing the format string of an administrative unit would potentially affect all named features within that unit.
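The basic spatial operation behind such inheritance is a point-in-polygon test; here is a minimal even-odd ray casting sketch (deliberately ignoring multipolygons, projection issues and the performance concerns that are the actual hard part):

```python
def point_in_ring(pt, ring):
    """Even-odd ray casting test: does the point fall inside the
    closed ring of (x, y) vertices? This is what a data converter
    would evaluate to let a feature inherit the format string of
    the administrative unit containing it."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        # does the edge cross the horizontal ray to the right of pt?
        if (y1 > y) != (y2 > y) and \
                x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

admin_unit = [(0, 0), (10, 0), (10, 10), (0, 10)]   # toy boundary
assert point_in_ring((5, 5), admin_unit)
assert not point_in_ring((15, 5), admin_unit)
```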

Another possible point of critique is that the format string is non-verifiable. But obviously if the current name tag is verifiable so is the format string which just describes its structure in an abstract form.


October 5, 2017
by chris

Autumn colors 2017

It’s autumn and the leaves are starting to change colors – matching that, here are a few impressions of autumn in the north from a satellite perspective.

The first is from the Yukon River at the Alaska-Yukon border:

Here are two magnified crops:

The second shows the southern slopes of the Verkhoyansk Range, Siberia around the Tompo River with early snow in the mountains. The area was also included in last year’s autumn colors mosaic.

Also for this, two magnified crops:

Both of these images are based on Sentinel-2 data. The next image shows a late autumn view of western Svalbard around Isfjorden taken by Landsat 8. Despite the high latitude warm weather can last quite long into autumn in this area so snow which had already fallen in mid September thawed away again almost completely in this October 2 image.

And finally a larger area view of northwestern Canada based on Sentinel-3 OLCI data:

The high resolution versions are all available in the catalog now: Alaska/Yukon, Siberia and Svalbard.


October 1, 2017
by chris

On basic small scale landcover rendering

What i am introducing here is something i originally wanted to work on during this autumn’s OSM hack weekend, but i made some good progress on the matter during a first preparatory look at things, so i decided to go ahead with it beforehand. If anyone is interested in the matter you can nonetheless come by at the hack weekend of course to talk about it.

In a way this is a followup to my work from last year on low zoom waterbody rendering, which so far has sadly not found much application, probably because it is a fairly strange and disturbing approach for a typical digital map designer and because i never bothered to put up a real demonstration. On a technical level what i introduce here is an advancement of the work on waterbody rendering, but i also combine it with some design ideas i had during the last months.

low zoom waterbody rendering from last year

Landcover mapping (by which i mean here the various kinds of areas mapped in OpenStreetMap based on either their physical surface characteristics or their primary human use – forests, farmland, builtup areas etc.) is a significant part of OpenStreetMap and quite a unique selling point of the project. Things like buildings, roads and addresses – while they exist in OSM – can also be obtained from other sources in many parts of the world in fairly good quality. Alternative landcover data available from outside OSM however is usually either old and outdated, based on automatic classification of satellite data (which is often unreliable and cannot differentiate many distinctions), or represents ought-to-be landuse as per local authorities instead of de facto characteristics.

Many OSM based maps show landuse areas at the high zoom levels in either a plain color or using patterns. At smaller scales landcover depiction is also useful in particular to delineate urban and rural areas and to allow the map user to identify different landscapes in particular if there is no relief depiction in the map. At small scales it is usually not the specific shape of individual landcover areas that needs to be shown but the overall distribution of the different landcover types. And due to the variable scale of the mercator projection certain needs for landcover depiction occur at different zoom levels depending on where on earth you look.
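The latitude dependence of the mercator scale is easy to quantify; here is a small sketch using the standard web mercator tiling scheme (256 px tiles):

```python
import math

EQUATOR_M = 40075016.686     # earth circumference at the equator, metres

def metres_per_pixel(lat_deg: float, zoom: int) -> float:
    """Ground size of one pixel in the web mercator tiling scheme
    (256 px tiles). The scale varies with cos(latitude), so the same
    landcover depiction needs appear at different zoom levels
    depending on where on earth you look."""
    return EQUATOR_M * math.cos(math.radians(lat_deg)) / (256 * 2 ** zoom)

# z9 at 60 degrees latitude matches the scale of z10 at the equator
assert abs(metres_per_pixel(60, 9) - metres_per_pixel(0, 10)) < 1e-6
```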

Based on these needs for plain color landcover rendering you have several options as you zoom out from the higher zoom levels:

  1. you can drop individual landcover classes. This is what the OSM standard style has done for a long time. Water areas and glaciers start at z6, forests at z8 and most other landcovers at z10. This is highly problematic because of the geographic bias inherent in these decisions and because it does not necessarily increase readability – especially if you keep the locally dominant landcover types.
  2. you can fade the colors (preferably in a color neutral and uniform way – not like OSM-Carto does recently) – i would say that is the cartography equivalent of giving up and using tables.
  3. you can perform geometric generalization of some form to the landcover shapes. This is hard to do in a way that looks good, especially for the lower zoom levels and if you have a lot of different landcover classes and it is always fairly subjective and therefore inevitably quite specific to a certain map use. See the following example for a rudimentary rendering of generalized builtup areas and forests as well as waterbodies.

Example for generalized landcover data

  4. you can keep rendering the landcovers as on the higher zoom levels.
  5. you can aggregate landcover classes into a smaller set of classes and show those at the low zoom levels.

The last two options are the ones i am going to demonstrate here. These options do not really exist if you render polygons with Mapnik or similar renderers, since at successively lower zoom levels you increasingly run into problems with performance and rendering artefacts (as i discussed a year ago with respect to water area rendering). So while you can get something out of Mapnik & Co. in such situations, it will not actually be a visualization of the map data but the abstract result of an algorithm used for something different from what it is supposed to be used for.

The demo i want to show here renders the landcover and water areas separately using a custom renderer and combines them with the conventionally rendered rest of the map. This is not directly possible because certain techniques used by the OSM standard style rely on the landcover layers being present. Therefore i had to do some modifications, in particular moving to preprocessed boundary data – which also allowed me to stop using Natural Earth boundaries at the lowest zoom levels and to get rid of the bogus boundaries at the 180 degree meridian.

OSM-Carto landcover and water colors alone at z9

The map shows zoom levels 1 to 9. From z10 upwards the standard style renders most of the landcovers, although the current version uses the ugly color fading of course – if you want a version without the fading, look at the one from Geofabrik. For the alternative-colors style i currently have no higher zoom level demo.

Quite a bit could be said about the differences in the alternative-colors style but that is outside the scope here – maybe something for a future post. For the landcover it extensively implements the idea of aggregating classes as you zoom out. Here an illustration of the color aggregation scheme:

Aggregation scheme for the landcover classes

At z9 and below there are four land colors (in addition to the base land color without a mapped landcover of course) plus two glacier and three water colors. You can add to that the tidalflat color which is rendered as part of the overlay. Aggregation is of course a subjective choice and what landcover classes are included in what aggregate class is not always an easy decision. I put cemeteries into low vegetation for example, although there are many cemeteries that are either largely covered with tall vegetation or vegetation free.
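Technically such an aggregation is just a many-to-one mapping from landcover tags to aggregate classes. Here is a sketch in which the concrete class names and assignments are illustrative guesses (only the cemetery example is confirmed in the text):

```python
# hypothetical aggregation table - not the actual scheme of the style
AGGREGATE = {
    'forest':      'woody vegetation',
    'wood':        'woody vegetation',
    'farmland':    'low vegetation',
    'meadow':      'low vegetation',
    'cemetery':    'low vegetation',    # as mentioned in the text
    'residential': 'builtup',
    'industrial':  'builtup',
    'sand':        'barren',
    'bare_rock':   'barren',
}

def aggregate_class(landcover: str) -> str:
    """Map a detailed landcover class to its low zoom aggregate;
    anything unknown falls back to the base land color."""
    return AGGREGATE.get(landcover, 'base land')

assert aggregate_class('cemetery') == 'low vegetation'
assert aggregate_class('quarry') == 'base land'
```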

Since the map is rendered into separate layers that you can switch on and off independently, you can compare the different style variants with and without landcover rendering.

Landcover and water area layers for the alternative-colors style

With linework and labels

I also included a fallback landcover layer in the alternative-colors styling based on the Green Marble vegetation map. This of course does not completely match the definition of the aggregate landcover classes the OSM data is drawn in, but it is pretty close. Where landcover mapping in OSM has gaps this layer can be used to supplement the rendering, leading to globally more uniform results less dependent on the actual mapping completeness in OSM. You could interpret this as a kind of what-if view for a hypothetical future perfect OSM database. Note however that landcover mapping in OpenStreetMap is not based on the notion that every square meter of the earth’s surface is supposed to be mapped and classified in some form – let alone that everything mapped makes sense to be rendered in a map like this.

Normal rendering exclusively based on OSM data

With fallback layer

I have not yet discussed the technical side, but as said this kind of rendering cannot be produced with Mapnik or similar tools. The landcover and water areas are rendered using a supersampling approach which i already discussed in more depth for the waterbodies. This technique is a kind of counterpart to conventional rendering like in Mapnik – it works very well for those tasks that are prohibitively hard or impossible with Mapnik, though it does not perform that well for things Mapnik is good at. The other nice thing is that the expensive part of the rendering process, producing the sample cache, is generic, i.e. independent of the actual map style with the colors and also independent of the zoom level. The two different landcover and water color schemes shown are produced from the same base rendering for all zoom levels. I have no code to publish here at the moment since the implementation is rather rudimentary, without any real interface to define the styling parameters. One technical aspect i should also mention is that since the landcover data is processed with Osmium, a number of broken geometries are missing that you would normally have in a rendering from an osm2pgsql database.
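The core idea of the supersampling technique can be illustrated with a toy example: rasterize coverage at several times the target resolution, then box-filter down so partially covered output pixels receive fractional coverage instead of the all-or-nothing result of rendering at the target resolution directly. This sketch only shows the downsampling step:

```python
def downsample(grid, factor):
    """Box-filter an oversampled coverage grid down by `factor`,
    averaging each factor x factor block into one output value."""
    h, w = len(grid) // factor, len(grid[0]) // factor
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block = [grid[y * factor + dy][x * factor + dx]
                     for dy in range(factor) for dx in range(factor)]
            out[y][x] = sum(block) / len(block)
    return out

# a binary high resolution mask becomes fractional coverage values
mask = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
assert downsample(mask, 2) == [[1.0, 0.25], [0.0, 0.0]]
```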

Regarding the demo map – ultimately this is certainly not a great map for most applications by any measure, because – like the OSM standard style it is based on – it tries to do too many things at once. But it is meant to demonstrate approaches 4 and 5 in the list above in a solid quality implementation. My own opinion is that it beats approaches 1 and 2 hands down, especially if you view this in terms of mapper feedback and geographic neutrality, but that is for the readers to decide for themselves. It certainly beats any trickery trying to implement 4 or 5 using Mapnik.

To view the demo click on any of the examples shown above and it will take you to the map with the layer configuration shown in the example. Because the map is composed from several semi-transparent layers it will be significantly slower to load than other maps of course.

September 20, 2017
by chris

Survey on organized editing in OpenStreetMap

The data working group of the OSMF has started a survey on the subject of organized editing in OpenStreetMap.

The survey is only the first step in the process of a possible regulation of organized editing activities and it will only provide a view of the opinions within the OSM community and not directly lead to decisions. But the idea of developing a policy on such activities is a pretty important step.

In general OpenStreetMap has very few formal rules on how contributors can contribute to the database. There is a set of overall editing principles described as how we map and good practice – but these are more a general constitution of the project than specific practical laws to be followed, and they are more principles for how data entered is supposed to look and less about how to enter the data. The only firm procedural rule that existed from the beginning was the requirement to not copy from other maps and to only use permissible sources for mapping.

This pretty anarchic framework works amazingly well with mapping being a largely self organized activity. Data users of course frequently complain about the lack of consistency in the data but stricter procedural rules on the mapping process would not necessarily have much effect on that. The amazing thing is not only that it works, it also works in a way that is in principle more globally egalitarian and culturally unbiased than any framework of rules imposed from outside can be. Mappers from a European city in principle have exactly the same freedom to map anything they can verifiably observe on the ground using any tags they see fit as people from a small village in a remote corner of the world. And if they meet somewhere in terms of mapping (i.e. they map in the same area on the same features) they do so on equal level. Of course there are still technological and linguistic barriers but at least there are no procedural rules that create additional discrimination.

This self organized framework however only works as long as mappers continue to work together based on the principles of self-organization. If mappers organize themselves outside the project and then map together in OpenStreetMap as an organized group, an individual mapper can no longer interact with such a group in the same way as with another individual, and self organization breaks down. This is the reason why regulation of organized editing activities is considered necessary by many mappers and why such a regulation is now being investigated by the DWG.

I would encourage everyone who has participated in OpenStreetMap or is using OpenStreetMap data to participate in this survey.


September 6, 2017
by chris

Once more on positional accuracy of satellite images

I wrote about the problem of positional accuracy of satellite images on several occasions before. While working with images from the Arctic for the update of the Franz Josef Land map i again realized a few things i want to share here.

The common assumption with high resolution open data images is that positional accuracy is usually fine but not great. But the errors as specified by the image producers are usually 90 percent limits and therefore not really that meaningful. This is a complicated matter in general and there are a lot of different sources of error that come into play here. The satellite image producers (in this case the USGS and ESA) are working on improving the positional accuracy of their images but the focus is mostly on reducing the relative errors between different images of the same area (allowing for better processing of time series) and not so much on the absolute accuracy.

The most widely neglected aspect of positional accuracy is the quality of the relief data used for terrain correction. The reasoning is typically that

  • a certain elevation data error generally translates into a significantly smaller error in image position for small field of view nadir looking satellites like Landsat or Sentinel-2.
  • the most relevant parts of the earth surface, i.e. densely populated urban areas, are in flat regions where relief data errors are not a big practical problem.
  • as long as relief data without big errors is available for about 90 percent of the earth land surface the rest does not really cause problems with the error statistics.
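The first point can be illustrated with a quick calculation. The sketch below shows how an elevation error translates into a horizontal position error depending on the viewing angle off nadir – a simplification that ignores earth curvature and orbit geometry, with the example numbers chosen by me for illustration:

```python
import math

def horizontal_shift(elevation_error_m: float, view_angle_deg: float) -> float:
    """Horizontal position error caused by an elevation error during
    terrain correction, for a given viewing angle off nadir."""
    return elevation_error_m * math.tan(math.radians(view_angle_deg))

# Landsat's field of view spans roughly 15 degrees (up to ~7.5 degrees
# off nadir), Sentinel-2's roughly 20.6 degrees (up to ~10.3 degrees).
# A 100m elevation error at the edge of the swath:
print(round(horizontal_shift(100, 7.5), 1))   # Landsat: ~13.2m
print(round(horizontal_shift(100, 10.3), 1))  # Sentinel-2: ~18.2m
```

This also shows why Sentinel-2, with its wider field of view, is more sensitive to relief data errors than Landsat.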

But if you work in parts of the world outside these 90 percent you can find some pretty noticeable errors. Sentinel-2 in particular is a good study case for this because (a) due to the relatively wide field of view it is fairly sensitive here and (b) it is known which relief data ESA uses for high latitudes so this data can be studied and analyzed directly.

Normally the largest errors occur in mountain areas but the errors that are easiest to measure and analyze are those at the coast, which occur when the relief data has a horizontal offset and the coast features elevation values significantly above sea level. Here are two examples of fairly impressive relative differences in the coast position between image pairs, both on the order of 100m, in Sentinel-2 images. The first is from Bell Island in Franz Josef Land:


The second example is from Bennett Island in the De Long islands:


This second example is also telling because it indicates ESA is using an older version of the relief data not containing the more recent update which replaces the offset and low accuracy data for the De Long islands with more precise data i introduced here:


You can see that the northwest coast is correct and does not change between the different images because it is at sea level even in the offset relief data used, while the south coast is wrong because of the non-zero elevation in the old relief data there (shown on the left in the second comparison).

The Bell Island image pair by the way is from four days apart in September 2016, the Bennett Island images are from the same day (August 25 this year) from Sentinel-2A and Sentinel-2B.


September 2, 2017
by chris

Franz Josef Land map update

I have updated the Franz Josef Land map with new data based on mostly 2016 Sentinel-2 imagery and also using a relief rendering primarily based on ArcticDEM data as a test case for the new data set.

Franz Josef Land is relatively well covered in ArcticDEM with only a fairly low fraction of data gaps but the data still contains a lot of artefacts that would severely ruin any map rendering based on it directly. The artefact detection is only semi-automated – identifying data errors through analysis of the data alone is not completely reliable and requires manual review for good results. In addition to masking gaps and artefacts and filling those areas with a combination of other data sources and shape-from-shading i also compensated for the elevation calibration problems described in my general report on ArcticDEM and flattened the water areas.
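The masking and filling steps described above can be sketched roughly as follows, assuming the ArcticDEM tile, a coarse reference DEM and a water mask are already loaded as aligned numpy arrays. The function name, threshold and simple fill strategy are made up for illustration – the actual processing, in particular the artefact detection and the shape-from-shading based filling, is considerably more involved:

```python
import numpy as np

def clean_dem(arctic_dem, reference_dem, water_mask, max_diff=50.0):
    """Rough sketch of the cleanup steps: mask cells deviating strongly
    from a coarse reference DEM as artefacts, fill masked cells from the
    reference, and flatten water areas to a constant level."""
    dem = arctic_dem.copy()
    # treat no-data cells and large deviations from the reference as artefacts
    artefacts = np.isnan(dem) | (np.abs(dem - reference_dem) > max_diff)
    # fill the masked areas from the secondary data source
    dem[artefacts] = reference_dem[artefacts]
    # flatten water surfaces (here simply to sea level)
    dem[water_mask] = 0.0
    return dem
```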

ArcticDEM original data with artefacts

ArcticDEM processed

Since the map production process itself, with label placement and generalization, is fully automated, updating the data basis and replacing the relief data source is fairly simple – which is extremely useful in particular for a map of an area that changes as fast as this one.

The results can be found in the Franz Josef Land map.

If you would like to create custom maps based on this data or would like to use the data for other purposes you can also find it on


September 1, 2017
by chris

ArcticDEM elevation data set review

Some readers have already been waiting for this – i have now completed my initial review of the ArcticDEM elevation data set which is being produced and published in several stages during this year by the University of Minnesota and the US NGA.

As is frequently the case with elevation data sources, the impression you might get from what is advertised about the data may lead to disappointment when you look into practically using the files – but this is still an interesting and valuable source of data. To find out about the possibilities and limitations of the data and what problems you might encounter using it read the full review.

I will more specifically cover a practical use case for the ArcticDEM data in the next post.


August 24, 2017
by chris

OpenStreetMap-Carto – a look into the future

This is a continuation of my previous post where i looked back at the developments during the last year of OpenStreetMap-Carto, the map style featured on the main OpenStreetMap website. In the first part i looked at recent changes in the project and here i want to try looking a bit into what the future might bring.

So what do all the recent developments mean for the future of OSM-Carto? I don’t really know. What concrete changes are made depends on what developers are interested in, which is hard to predict. But it is likely that the changes in procedure i discussed in the first part are going to influence the incentives and motivations for contributions. What i think i can observe from an esthetic point of view, when comparing more recent changes to earlier changes from the last years, is a move in a direction resembling naïve art. This is a very interesting outlook which you could consider as kind of matching the way mapping is performed in OpenStreetMap. However mapping in OSM – while usually not based on specialized cartographic or geo-science knowledge and education – in most cases shows a high degree of sophistication and knowledge and cannot really be considered a cartographic equivalent of naïve art.

Should OSM-Carto indeed move further towards naïve art it will certainly struggle with the high degree of complexity and sophistication both of its own legacy and of the OpenStreetMap data itself. How this can work out is an interesting question. On a more technical level this also relates to a problem i pointed out about a year ago.

In the past OSM-Carto has often been in the vanguard with respect to design in interactive digital maps. Paul Norman recently pointed out several examples of this in his SotM talk on OSM-Carto (second half of the video), in particular polygon size based label scaling, systematic and automated color selection based on perceptual color spaces and multilingual labels. From my own work i could add the introduction of program generated randomized periodic patterns, which was essentially non-existent in digital cartography before being introduced in OSM-Carto. I predict this kind of innovation will be seen less in the future because development is becoming more focused on localized changes with a relatively low level of innovation and coordination, combined with trial-and-error changes like (metaphorically) moving stuff around to test if it looks better that way without a deeper understanding of why certain things work and others don’t. This could also lead to the map moving closer to the mainstream of OSM based maps as offered by many commercial providers. Whether the project can continue to attract contributors with the background and vision to develop innovative solutions to the big problems, and can provide them a supportive environment, remains to be seen.
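The idea behind polygon size based label scaling can be illustrated with a small sketch – note that the function, the constants and the logarithmic formula here are my own made-up simplification and do not reproduce the actual OSM-Carto implementation:

```python
import math

def label_size(way_pixels, base=10.0, max_size=19.0, ref_area=3000.0):
    """Illustrative sketch of polygon size based label scaling: labels
    of large polygons grow with the logarithm of the on-screen area of
    the feature, up to a maximum size."""
    if way_pixels <= ref_area:
        return base
    return min(max_size, base + 2.5 * math.log10(way_pixels / ref_area))
```

The effect is that a large forest or lake gets a visibly larger label than a small one, while the size stays bounded so labels never dominate the map.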

The most critical point i see in the future development of OSM-Carto is whether it can still positively fulfill its function as a feedback tool for mappers, encouraging correct and accurate mapping. Anticipating how mappers will react to their data being rendered in a certain way in the map is one of the most difficult tasks for an OSM-Carto developer and getting this wrong can in the long term produce a lot of damage. No matter in what direction the style steers design-wise, this is the thing to keep a close eye on. Although useful and constructive feedback for mappers is one of the documented goals of the style, there is currently no policy or procedural mechanism that ensures changes do not interfere with this.

My recommendations for those who care about the public face of OSM and OSM-Carto are:

  • As a mapper resist the urge to adjust your mapping work for its use in the map – either directly as mapping for the renderer or indirectly by mapping or tagging things in a way you believe will make it easier for the style developers to produce a good looking map, as i wrote about recently. Instead base your mapping on verifiable observations on the ground. If the way OSM-Carto renders certain things you map looks strange or seems to suggest mapping things differently, also take a look at other map styles and how they render them.
  • Contribute to style development by making pull requests with actual changes. As a non-maintainer you cannot directly make changes but you now only need to convince one maintainer of your change to get it into the map.
  • Hold your maintainers accountable for their work. As explained, the style maintainers now have more autonomy in making decisions but this also means responsibility for the changes made. This only works though if map users also provide feedback on changes made. Keep in mind however that just stating you do not like a certain change is not very helpful or convincing. If you see problems with a change in the style that are not a clear technical bug it is usually best to try explaining the problem in light of the documented goals of the style.
  • Develop alternative map styles. From my perspective this is the most important point of all. If OSM-Carto is without alternatives there is much less incentive to substantially improve it than if there are other styles with similar goals but different design approaches. While there are of course countless general purpose OSM based map styles (only a few of them under an open license of course) and also a number of derivatives of OSM-Carto from local communities like the French and German OSM styles, most of these concentrate on goals that are fundamentally different from those of OSM-Carto. There are also some style variants being developed as personal projects by OSM community members – like here. What i would really like to see in the next few years is at least a handful of independent map styles being offered for display, each being an honest attempt at providing a good map to the OSM community even if developed with less manpower than OSM-Carto. Considering how much maps influence the image of OpenStreetMap both from the outside and from the inside through the mappers i think such a development would be most productive and enabling for the OSM community.