Imagico.de

blog

Landsat Winter Alaska 2017

November 15, 2017
by chris
0 comments

Into the light

I have a somewhat different satellite image than usual here:

This is a strip of nine Landsat scenes recorded over parts of Alaska in early winter a few days back. I rotated it to align roughly with the satellite recording direction, so you need to scroll down to see the whole image. As you scroll down you move from the limit of the polar night at the northern end towards the southwest and towards the sun, across about 1500km.

You will notice a slight bend in the image when doing that – this is because the image coordinate system is not actually aligned to the satellite orbit but is a simple oblique Mercator projection. Due to the satellite’s sun-synchronous orbit the satellite ground path is not actually a great circle but kind of spirals around the earth following the sun.

The southern end of this image strip is defined by the end of the Landsat recording which does not extend over the open ocean (it is Landsat after all). The northern end is the limit of normal Landsat recordings at this time of year due to the low sun position.

Here a Sentinel-3 OLCI image from the same day (and this time with north up orientation – also allowing you to identify where exactly the first image is placed) showing a much tighter northern limit.

And for comparison here a false color infrared Sentinel-3 SLSTR image where no recording limit is imposed showing the actual limit of light – but of course not in natural colors.

The two Sentinel-3 images also show an impressive cloud of dust extending SSW from the delta of the Copper River in southern Alaska at the right side of the image. Here a larger view to show this better.

And finally two crops from the first image – the first one from the north showing how you can watch the rivers freezing over at this time of the year near Fort Yukon.

And the second from the south showing the indeed very windy yet sunny weather at Tugidak Island.

November 10, 2017
by chris
0 comments

Satellite image news

Some news that might be of interest for some of my readers – without any attempt at completeness.

  • The Sentinel-2 package format has changed again. This change is rather small and will not significantly affect most users. The interesting and funny thing however is that the second time stamp saga from the previous change now seems to have gained another twist – the second time stamp is now defined to ensure a deterministic, repeatable name across time for the same product. In other words: it does not have a meaning any more, it is just there to be able to distinguish between several packages with different data but otherwise the same name (which can happen at data strip boundaries).
  • Another thing on Sentinel-2 – it has not been widely advertised but there is a data quality report on Sentinel-2 data updated at more or less regular intervals here. You need to be careful when reading this of course. Regular updates do not mean all information in that report is completely up to date. And you have to know how to interpret the information given. Take for example the absolute geolocation accuracy (which i have written about recently as well) – this can only be reliably measured for areas where you have accurate reference data, which does not usually include regions where accuracy tends to be bad. So the <11m at 95.5% confidence is likely not based on an unbiased set of reference locations. The reference locations are not published of course – neither is the source of the reference data used.
  • The USGS is starting to introduce what they call Landsat Analysis Ready Data. This essentially means Landsat imagery reprojected to a common coordinate system for the United States and distributed in tiled form. I am not going to review this data since i think this kind of product is conceptually and technically a dead end. It is by definition a regional data product that cannot be extended to global coverage, and performing double resampling (from the raw Level 0 data to the UTM grid of the orthorectified Level 1 product and then again to the Albers Equal Area projection of the ARD grid) is wasteful and unnecessary. There are obviously advantages to processing and using data in a common grid for larger regions but if the solution for that limits you to areas within the United States it is not really a universally usable approach.
  • In the field of commercial earth observation Planet Labs has launched six new SkySat satellites – those are the somewhat larger satellite systems from their acquisition of Terra Bella from Google. I briefly mentioned them in my discussion of Planet Labs some time ago. There is very little information publicly available on the actual operation of these satellites. They claim a recording capacity of 185k km^2 per day for the whole fleet of 13 of these satellites. That is not much. With a recording swath width of 8km that amounts to less than 2000km recording length per day per satellite or about 20 seconds of recording per orbit (see the quick back-of-the-envelope calculation after this list). Whether this will be increased in the future is unknown but at the moment it seems that these satellites – being positioned to record at different times of the day and featuring a monochrome video recording capability – are mostly intended for what you might call event photography from space.
  • There are two upcoming launches of Earth observation satellites – for November 14 there is the planned launch of JPSS-1 which carries a second VIIRS instrument in addition to the one on Suomi NPP launched in 2011. And in late December there is the planned launch of GCOM-C. Both have been subject to delays – JPSS-1 was originally supposed to launch in 2016, GCOM-C in 2014.
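
For illustration, here is the quick back-of-the-envelope calculation behind the SkySat numbers quoted above – the fleet capacity and swath width are the published figures, the ground speed and orbits per day are rough assumed values:

    # rough sanity check of the claimed SkySat recording capacity
    fleet_capacity_km2 = 185_000   # claimed recording capacity per day for the whole fleet
    num_satellites = 13            # SkySat satellites in orbit
    swath_km = 8                   # approximate recording swath width

    per_sat_km2 = fleet_capacity_km2 / num_satellites   # ~14200 km^2 per satellite per day
    strip_length_km = per_sat_km2 / swath_km            # ~1780 km recording length per day

    ground_speed_km_s = 7.0        # assumed ground track speed for a low earth orbit
    orbits_per_day = 15            # assumed, typical for a ~500 km orbit

    seconds_per_day = strip_length_km / ground_speed_km_s
    seconds_per_orbit = seconds_per_day / orbits_per_day

    print(f"{strip_length_km:.0f} km per satellite per day")    # ~1780 km
    print(f"{seconds_per_orbit:.0f} s of recording per orbit")  # ~17 s

which is where the "less than 2000km per day" and "about 20 seconds per orbit" figures come from.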

I updated my satellite sensor chart accordingly. Note i still could not bring myself to specify a full coverage interval for the PlanetScope satellites. They now show a decent monthly coverage of >90 percent between -60 and 75 degrees latitude for the combination of RapidEye and PlanetScope but full coverage means full coverage for me. And demo or it did not happen.


November 1, 2017
by chris
0 comments

Satellite image acquisitions – yearly report

About a year ago i wrote my report on the first year of acquisitions of Sentinel-2 as well as of Landsat over a matching time frame. This was – and to my knowledge still is – the most detailed and accurate analysis of image data available from these satellites. Here is an update for the time frame from October 2016 to October 2017.

The October division is meant to include exactly one summer season of both the northern and the southern hemisphere. A calendar year based division would always split the southern hemisphere summer season.

Here is the plot for the overall recording volume of all satellites:

Landsat

Both Landsat satellites have operated during the last year without any notable incidents or interruptions of recordings. Landsat 7 had its last orbit maintenance maneuver in early 2017 and is now in a steadily declining orbit, which means the recording time will move from the current roughly 10:15 to earlier times, as has happened for EO-1 previously.

Here are the coverage maps for Landsat 8 day time acquisitions:

The most notable difference from previous years is that Antarctic coverage was significantly reduced during the 2016-2017 summer (see last year for comparison). You can see this in the line plot on top as a dip in the Landsat 8 line near the end of 2016 which differs significantly from the patterns of the previous years. To my knowledge there has so far not been a statement from the USGS as to why this change was made.

Otherwise not much has changed – we now get routine off-nadir acquisitions for northern Greenland and the Antarctic interior. In Greenland these always happen for the same path which means there is room for improvement by selecting the path dynamically based on weather in the target area. All 2017 northern Greenland off-nadir images are severely affected by clouds.

Also we still have the two gaps in land area coverage at lower latitudes – Rockall and Iony Island (Edit: i noticed there is actually one image for Rockall – though no regular coverage. Iony Island is the more meaningful omission).

Sentinel-2

For Sentinel-2A we are looking at the second year of operations and this might lead to expectations of an increased level of routine and therefore reliability. We also get the first images from Sentinel-2B. Here are the numbers for Sentinel-2A and Sentinel-2B separately:

And here the combined numbers with a different color scale.

I should emphasize that these are the images publicly available. As pointed out already in a previous report there are significant differences between the published acquisition plans and the actual recordings and furthermore publication of images is frequently incomplete. Here an example from Sentinel-2B from my detailed statistics page (which i also updated to the current state).

I have not determined precise numbers but it is clear that the volume of both images planned but not recorded and images recorded but not published is significant. Especially the latter, with the arbitrariness shown in the image above, seems quite embarrassing.

The acquisition patterns are nearly the same as last year and apparently also the same for Sentinel-2A and Sentinel-2B. To summarize: most of Europe and Africa as well as Greenland are recorded at every opportunity – which means a ten day interval for each satellite – the rest of the larger land masses except Antarctica only at every second opportunity, except for some seemingly arbitrary small special interest areas where a ten day interval is recorded as well. Smaller islands are missing entirely. Antarctica has been covered during the 2016-2017 summer but mostly at a much lower frequency than the rest of Earth.

Apart from the spatial distribution of acquisitions (which quite clearly is a conscious political choice) the most striking difference from Landsat is that high latitude acquisitions in Greenland and the European Arctic islands are not reduced to account for the naturally larger overlap between recording opportunities. In northern Greenland this leads during summer to frequently more than one image per day. While this can be nice for data users interested in those areas and also kind of compensates for the otherwise low focus on these regions, it is fairly wasteful in terms of recording resources and probably results from blindly sticking to the rule to record Europe and Greenland at every opportunity, decided on by bureaucrats who have no clue what this actually means in practice.

Conclusions

So overall not that much has changed since last year – which i guess is good news for Landsat and less good news for Sentinel-2 since the latter is still subject to the same problems and limitations as last year. But maybe we just need a few more years to get used to these problems…

Apart from the problems already mentioned Sentinel-2 operations continue to be plagued by delays in data processing and other incidents. While for Landsat you can fairly reliably predict when the next image will be recorded for a certain place on earth and that it will be available a few hours afterwards, for Sentinel-2 this is still much less the case.

With all this beating on Sentinel-2 problems it should however be mentioned that with two satellites now operating at a more or less constant level, Sentinel-2 usually offers a higher recording frequency than Landsat 8 (which is a practically sensible comparison since use of data from Landsat 7 is often fairly difficult due to the SLC gaps) – even in the lower priority areas – except for the small islands and Antarctica of course. In other words: if you look for the most recent image of a certain point on earth it is more likely you will find it in the Sentinel-2 archive than from Landsat 8 – despite the fact that delays in processing, missing recordings and missing publications put Sentinel-2 at a significant disadvantage.

And another positive thing about Sentinel-2: availability of the download infrastructure has improved a lot in the past months. Longer unscheduled downtimes where no downloads are possible at all are now fairly rare.

Here for reference all the recording visualizations for this and the previous years:

year | day      | night | day pixel coverage
2014 | LS8, LS7 | LS8   | LS8
2015 | LS8, LS7 | LS8   | LS8
2016 | LS8, LS7 | LS8   | LS8, S2A
2017 | LS8, LS7 | LS8   | LS8, S2A, S2B, S2 (both)

And also see the detailed recording patterns per orbital period and the daily recording numbers.


October 29, 2017
by chris
0 comments

Islands in Spring and Autumn

A few satellite image impressions from the last weeks showing islands in spring and autumn. First a view of southwest Iceland from just a few days ago:

Then a clear weather glimpse of South Georgia in spring – with a large iceberg to the northeast:

And finally an image of Onekotan Island in the northern Kuril Islands:

The first two are based on Copernicus Sentinel-2 data, the last is created from Landsat imagery.


October 26, 2017
by chris
0 comments

Drawing the lines

Since i had already finished what i originally planned for the last OSM Hack Weekend in Karlsruhe before the actual weekend, what i actually worked on there was something different – though not unrelated.

Rendering lines in a map is something that at first glance seems like the simplest thing to do, but in reality there are quite a number of things that need to be considered for lines in a map to be well readable. In particular, a dashed or dotted line is much more difficult to get right than a solid line.

The OSM standard style uses dashing to differentiate tracks by tracktype and footways/cycleways by surface. This works reasonably well at the high zoom levels but it degrades to the point of being completely unreadable as you zoom out in areas with a dense network of paths. Like in these examples:

Now you can try to vary the styling, for example by adding bright halos, increasing contrast or varying the line width, but ultimately a dashed or dotted line always makes it more difficult to identify the paths as continuous lines in areas with a lot of detail. A fundamentally different and possibly better approach would be to only draw the most important ways at these scales. But for that you’d need an assessment of importance, which is not really something you can readily find in the data and which ultimately is quite subjective and likely would not be very intuitive in many situations. Some map users for example might find it helpful if only those paths are shown that are part of a long distance trail. A local map user might on the other hand consider a different path more important because it is the shortest, easiest and most frequently used connection between two villages in the area.

One solution for tracks and paths at z13/14 i had already quickly tested some time ago is to drop the dashing and use continuous lines at these scales. This severely limits the possibilities to distinguish between different classes of paths – you can essentially only use the line width and color to differentiate and at narrow line widths it becomes more and more difficult to distinguish different colors because all pixels contain a mixture of background and line color.
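
To make the color mixing point a bit more concrete, here is a minimal sketch of what happens to a line narrower than a pixel – the colors and widths are made-up illustrative values, not the ones actually used in the style:

    # a line covering only a fraction of a pixel gets blended with the
    # background, pulling different line colors towards similar pale mixtures
    def blend(line_rgb, bg_rgb, coverage):
        """Linear mix of line and background color for a pixel the line
        only partially covers (coverage between 0 and 1)."""
        return tuple(round(c * coverage + b * (1.0 - coverage))
                     for c, b in zip(line_rgb, bg_rgb))

    background = (242, 239, 233)   # a light map background color
    track_brown = (150, 100, 30)
    path_red = (200, 60, 60)

    for width_px in (2.0, 1.0, 0.5):
        coverage = min(width_px, 1.0)   # fraction of the pixel the line covers
        print(width_px,
              blend(track_brown, background, coverage),
              blend(path_red, background, coverage))
    # at 0.5 px both colors end up as washed-out mixtures with the background
    # that are considerably harder to tell apart than the full-strength colors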

One thing that prevented implementing this approach was the fact that cycleways in the standard style are traditionally rendered in blue color and a solid blue line looks just too much like a water feature intuitively. The use of blue color for cycleways has always been a sore spot but attempts to change that in the past were always hampered by the lack of other options. In particular the use of purple for boundaries creates severe limitations. Since i got rid of the purple boundaries i have some more freedom in that matter now.

Finding the right balance in colors, line widths and – at the higher zoom levels – the dashing patterns is difficult but i think the results are quite agreeable. This modification puts a stronger emphasis on footways and cycleways in the map but that in my eyes is mainly compensation for the under-representation they have in the standard style at the moment.

At z13 all lines are solid, the tracks vary in width slightly to indicate the tracktype but this variation is not large enough to reliably identify the individual track types although you can usually distinguish grade1 from grade4. Footways and cycleways are the same color (red) which can be distinguished from the track brown in nearly all situations.

(same areas in the standard style: here, here and here)

Overall the map image is much clearer and less noisy. You can better identify individual tracks and paths and their routes and connections, in particular in densely mapped areas, although you lose the ability to differentiate between different types in less densely mapped areas.

At z14 styling is very similar, the line width variation for tracks is somewhat stronger and i start using dashing for tracks without a tracktype, indicating to the mapper that important information is missing here.

(same areas in the standard style: here, here and here)

At z15 a white casing is added like it is also done in the standard style. Tracks are the same as in the standard style but cycleways are purple now and both cycleways and footways are stronger and differentiate clearly by surface type with long dashing for paved, short dashing for unpaved and alternating long/short for unspecified surface.

(same areas in the standard style: here and here)

I also considered differentiating out a third class of paths. The standard style removed that some time ago, which leads to the somewhat peculiar situation that highway=path + foot=designated + bicycle=designated is shown in the cycleway color while highway=path without foot or bicycle tags is shown in the footway color. But unfortunately mapping is often very inconsistent in this matter so this would not necessarily improve usability that much. The meaning of the colors essentially is:

  • purple: usable by bike, usually also on foot
  • red: usable on foot, maybe also by bike

At higher zoom levels the line width is slowly increased just like for tracks and the dashing is also slightly enlarged for better readability.

The style modifications for this can be found here.

I hope this description gives a tiny bit of insight into how map style design works when you systematically analyze and address problems. The actual coding is not that much work – the hard work lies on the one hand in analyzing the map rendering and identifying the problems, and on the other hand in adjusting and testing the various parameters and observing how the results affect the map viewing experience and how the different colors interact with each other in different geographic settings at different latitudes and the resulting scales.

In case you wonder what you can do as a mapper to allow for better readable rendering of tracks/footways/cycleways:

  • tag tracktype and surface where you know it.
  • tag access restrictions, in particular foot=* and bicycle=* as they apply.
  • although not currently rendered, further information, in particular width=*, smoothness=* and sac_scale=*, could be used to differentiate rendering further.

Tracks, footways and cycleways are not the only place where the standard style uses dashing and also not the only place where this leads to problems – administrative boundaries and intermittent waterways are affected as well. There are already some improvements in these areas in the alternative-colors style. Maybe i will write about this in a future post.


October 19, 2017
by chris
0 comments

New Zealand mosaic and 3d views

I here introduce a new satellite image mosaic i produced of New Zealand.

This is based on Sentinel-2 images from 2015 to 2017 and otherwise shares many of the characteristics of my previous mosaics like the high level of cloud freeness, seamless ocean depiction and assembly with priority to snow minimum and vegetation maximum.

What’s new is a significant improvement to the atmosphere correction methodology, which i used here for the first time on a larger project. This results in a more uniform and more balanced color rendering overall. It is also the first time i have produced the matching vegetation map at the Sentinel-2 resolution of 10m.

Here a few sample crops, more can be found on the mosaic description page on services.imagico.de.



I also produced a few new 3d views based on this mosaic, here two examples:


More 3d views can be found in the catalog on services.imagico.de.


October 12, 2017
by chris
0 comments

You name it – on representing geographic diversity in names

There has recently been some discussion in OpenStreetMap on names and labeling due to some people expressing the desire to abandon the geographically neutral labeling of the OpenStreetMap standard style. One of the things this discussion once again showed is a basic problem in the way names are recorded in OpenStreetMap which I here want to briefly discuss.

The OpenStreetMap naming system is based on the idea that features in the database can have a local name – the name predominantly used locally for the feature – as well as an arbitrary number of names in different languages, that is, how non-locals or locals speaking a different language than the majority name it. The first is to be mapped in the name tag, the latter go into name:<language> tags where <language> is usually the two letter code of the language of the name. There are other name tags like alt_name (for an alternative local name) or old_name (for a historic name no longer in active use).

The OpenStreetMap standard style renders the content of the name tag and this way is supposed to display the name locally used. This is one of the most characteristic aspects of the map and a highly visible demonstration of OpenStreetMap being based on local knowledge and valuing geographic and cultural diversity. That there are of course people who think it is more important to have yet another map (in addition to hundreds of commercial OSM based maps) where they can read the labels than to have at least a single map that can be read by every local mapper all over the world in their local area is obvious – but this is not my topic here.

The problem with basing labels on a single name tag for the local name is that then local mappers are often in conflict between tagging the actual local name and tagging whatever they want to see on the map – which might be affected by the desire of uniformity in labeling or to make the map better readable for non-locals. As a result the name tag often contains compound strings containing names in multiple languages, in particular in regions where multiple languages are widely used by locals and there might not even be a single dominant local name.
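
To give a concrete (and purely illustrative) example of such a compound value – the Brussels node has for a long time carried something along the lines of

    name=Bruxelles - Brussel
    name:fr=Bruxelles
    name:nl=Brussel
    name:en=Brussels

where the name tag is already a formatted combination of the French and Dutch names rather than a single local name.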

Labels in several languages in Morocco

Labels in several languages in Korea

The solution to this problem would lie in dropping the illusion that there is always a single local name that can be verifiably mapped. Instead you would tag the names in the different languages as it is done currently and add a format string indicating what the common form of displaying the names of this feature locally is. Separating the multilingual name data from the information on local name use is the key here.

The format string would normally not have to be specified for every feature individually since typically all features in an area would use the same format string. Instead you would have the individual features inherit the format strings of the administrative units they are located in.

For example in case of Germany the admin_level 2 boundary relation (51477) would get something like language_format=$de – and there would be no need for further format strings locally except maybe for a few smaller areas with a local language or individual features with only a foreign language name. Switzerland (51701) would get language_format=$de/$fr/$it/$rm and the different Cantons would get different format strings depending on the locally used languages.

The key and syntax for the format string are just an example of course to illustrate the idea – those could be different.

I think the advantages of this concept are obvious:

  • The rules for the individual language name tags are much clearer and better defined so there is less room for arbitrariness resulting in more reliable data for the data user than from the name tag.
  • Any desire of the local mappers to get certain labels on the map would be articulated in the format strings and would not taint the actual name data.
  • The format string allows data users a lot more flexibility – it can be ignored, modified or replaced by a custom and globally constant format string or a more complex interpreting function with fallbacks, transliterations etc. Or data users can select if they want to use format strings on a per feature basis or only as inherited from the admin units.
  • The problem that different script variants are needed for the same Unicode characters in different languages (a.k.a. the Han unification problem) would be solved as well.
  • Using the individual language names as the data source for labels instead of the separately tagged name tag allows for quality control of this data through the map – likely resulting in fewer errors and inconsistencies in the name data overall.
  • There would be an easy fallback during transition to this tagging system – if there is no valid format string or any of the languages in the format string is not tagged you could fall back to the legacy name tag.

But i will also mention the main disadvantages of this idea:

  • The data users do not get a hand drawn label string prepared by the mapper and ready to use but have to interpret more structured information in the form of individual names and format strings.
  • Allowing features to inherit the format string of administrative units will require spatial relationship tests which are too expensive to be done on the fly so this would need support from the OSM data converters, in particular those that are used for map rendering (like osm2pgsql, Imposm). This is not trivial, especially if you want to take into account that changing the format string of an administrative unit would potentially affect all named features within that unit.

Another possible point of critique is that the format string is non-verifiable. But obviously if the current name tag is verifiable so is the format string which just describes its structure in an abstract form.
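
To make the idea more tangible, here is a minimal sketch of how a data user could interpret such a format string, including the fallback to the legacy name tag. The language_format key and the $xx syntax are just the example syntax from above and the tag values are made up for illustration:

    import re

    def render_label(tags, language_format=None):
        """Build a label from the individual name:<lang> tags and a format
        string like '$de/$fr/$it/$rm'; fall back to the plain name tag if
        the format string is missing or cannot be fully resolved."""
        if language_format:
            langs = re.findall(r"\$([a-z_]+)", language_format)
            names = {lang: tags.get("name:" + lang) for lang in langs}
            if langs and all(names.values()):
                return re.sub(r"\$([a-z_]+)",
                              lambda m: names[m.group(1)], language_format)
        return tags.get("name")   # legacy fallback during the transition

    # made-up example: a feature in a bilingual part of Switzerland
    tags = {"name": "Biel/Bienne", "name:de": "Biel", "name:fr": "Bienne"}
    print(render_label(tags, "$de/$fr"))              # -> Biel/Bienne
    print(render_label({"name": "Bern"}, "$de/$fr"))  # -> Bern (fallback)

A real implementation would of course also have to handle inheritance of the format string from the enclosing administrative units, which is the expensive part mentioned in the disadvantages above.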


October 5, 2017
by chris
0 comments

Autumn colors 2017

It’s autumn and the leaves are starting to change colors – matching that, here are a few impressions of autumn in the north from a satellite perspective.

The first is from the Yukon River at the Alaska-Yukon border:

Here two magnified crops:

The second shows the southern slopes of the Verkhoyansk Range, Siberia around the Tompo River with early snow in the mountains. The area was also included in last year’s autumn colors mosaic.

Also for this two magnified crops:

Both of these images are based on Sentinel-2 data. The next image shows a late autumn view of western Svalbard around Isfjorden taken by Landsat 8. Despite the high latitude warm weather can last quite long into autumn in this area so snow which had already fallen in mid September thawed away again almost completely in this October 2 image.

And finally a larger area view of northwestern Canada based on Sentinel-3 OLCI data:

The high resolution versions are all available in the catalog on services.imagico.de now: Alaska/Yukon, Siberia and Svalbard.


October 1, 2017
by chris
0 comments

On basic small scale landcover rendering

What i am introducing here is something i originally wanted to work on during this autumn’s OSM hack weekend, but i made some good progress on the matter during a first preparatory look at things so i decided to go ahead with it beforehand. If anyone is interested in the matter you can nonetheless come by at the hack weekend of course to talk about it.

In a way this is a followup to my work from last year on low zoom waterbody rendering, which so far sadly has not found widespread application, probably because it is a fairly strange and disturbing approach for a typical digital map designer and because i never bothered to put up a real demonstration. On a technical level what i introduce here is kind of an advancement of the work on waterbody rendering but i also combine it with some design ideas i had during the last months.

low zoom waterbody rendering from last year

Landcover mapping (by which i mean here the various kinds of areas mapped in OpenStreetMap based on either their physical surface characteristics or their primary human use – forests, farmland, builtup areas etc.) is a significant part of OpenStreetMap and quite a unique selling point of the project. Things like buildings, roads and addresses – while they exist in OSM – can also be obtained from other sources in many parts of the world in fairly good quality. Alternative landcover data available from outside OSM however is usually either old and outdated, based on automatic classification of satellite data which is often unreliable and cannot distinguish many classes, or represents the intended land use as designated by local authorities instead of the de facto characteristics.

Many OSM based maps show landuse areas at the high zoom levels in either a plain color or using patterns. At smaller scales landcover depiction is also useful, in particular to delineate urban and rural areas and to allow the map user to identify different landscapes, especially if there is no relief depiction in the map. At small scales it is usually not the specific shape of individual landcover areas that needs to be shown but the overall distribution of the different landcover types. And due to the variable scale of the Mercator projection certain needs for landcover depiction occur at different zoom levels depending on where on earth you look.

Based on these needs for plain color landcover rendering you have several options as you zoom out from the higher zoom levels:

  1. you can drop individual landcover classes. This is what the OSM standard style has done for a long time. Water areas and glaciers start at z6, forests at z8 and most other landcovers at z10. This is highly problematic because of the geographic bias inherent in these decisions and because it does not necessarily increase readability – especially if you keep the locally dominant landcover types.
  2. you can fade the colors (preferably in a color neutral and uniform way – not like OSM-Carto has done recently) – i would say that is the cartographic equivalent of giving up and using tables.
  3. you can perform geometric generalization of some form to the landcover shapes. This is hard to do in a way that looks good, especially for the lower zoom levels and if you have a lot of different landcover classes and it is always fairly subjective and therefore inevitably quite specific to a certain map use. See the following example for a rudimentary rendering of generalized builtup areas and forests as well as waterbodies.

Example for generalized landcover data

  4. you can keep rendering the landcovers as on the higher zoom levels.
  5. you can aggregate landcover classes into a smaller set of classes and show those at the low zoom levels.

The last two options (4 and 5) are the ones i am going to demonstrate here. These options do not really exist if you render polygons with Mapnik or similar renderers since at successively lower zoom levels you increasingly run into problems with performance and rendering artefacts (as i discussed a year ago with respect to water area rendering). So while you can get something out of Mapnik & Co. in such situations it will not really be a visualization of the map data but the abstract result of an algorithm used for something different than what it is supposed to be used for.

The demo i want to show here renders the landcover and water areas separately using a custom renderer and combines them with the conventionally rendered rest of the map. This is not directly possible because certain techniques used by the OSM standard style rely on the landcover layers being present. Therefore i had to do some modifications, in particular moving to preprocessed boundary data – which also allowed me to stop using Natural Earth boundaries at the lowest zoom levels and to get rid of the bogus boundaries at the 180 degree meridian.

OSM-Carto landcover and water colors alone at z9

The map shows zoom levels 1 to 9. From z10 upwards the standard style renders most of the landcovers although the current version uses the ugly color fading of course – if you want a version without the fading look at that from Geofabrik. For the alternative-colors style i currently have no higher zoom level demo.

Quite a bit could be said about the differences in the alternative-colors style but that is outside the scope here – maybe something for a future post. For the landcover it extensively implements the idea of aggregating classes as you zoom out. Here an illustration of the color aggregation scheme:

Aggregation scheme for the landcover classes

At z9 and below there are four land colors (in addition to the base land color without a mapped landcover of course) plus two glacier and three water colors. You can add to that the tidalflat color which is rendered as part of the overlay. Aggregation is of course a subjective choice and which landcover classes are included in which aggregate class is not always an easy decision. I put cemeteries into low vegetation for example although there are many cemeteries that are either largely covered with tall vegetation or vegetation free.
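
As a sketch of what such an aggregation can look like in practice – the concrete assignments here are my own illustrative guesses (apart from the cemetery example just mentioned) and not the exact scheme of the style:

    # illustrative mapping of detailed OSM landcover tags to a small set
    # of aggregate low zoom classes - examples only, not the actual scheme
    AGGREGATE_CLASS = {
        "landuse=forest":      "woodland",
        "natural=wood":        "woodland",
        "landuse=farmland":    "agriculture",
        "landuse=orchard":     "agriculture",
        "landuse=residential": "builtup",
        "landuse=industrial":  "builtup",
        "natural=grassland":   "low vegetation",
        "landuse=meadow":      "low vegetation",
        "landuse=cemetery":    "low vegetation",   # see the remark above
        "natural=glacier":     "glacier",
    }

    def low_zoom_class(tag):
        """Return the aggregate class for a landcover tag, None meaning
        the area is rendered as plain land without mapped landcover."""
        return AGGREGATE_CLASS.get(tag)

    print(low_zoom_class("landuse=cemetery"))   # -> low vegetation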

Since the map is rendered into separate layers that you can switch on and off independently, you can compare the different style variants with and without landcover rendering.

Landcover and water area layers for the alternative-colors style

With linework and labels

I also included a fallback landcover layer in the alternative-colors styling based on the Green Marble vegetation map. This of course does not completely match the definition of the aggregate landcover classes the OSM data is drawn in but it is pretty close. Where landcover mapping in OSM has gaps this layer can be used to supplement the rendering, leading to globally more uniform results less dependent on the actual mapping completeness in OSM. You could interpret this as kind of a what-if view for a hypothetical future perfect OSM database. Note however that landcover mapping in OpenStreetMap is not based on the notion that every square meter of the earth’s surface is supposed to be mapped and classified in some form – let alone that everything mapped makes sense to be rendered in a map like this.

Normal rendering exclusively based on OSM data

With fallback layer

I have not yet discussed the technical side but as said this kind of rendering cannot be produced directly with Mapnik or similar tools. The landcover and water areas are rendered using a supersampling approach which i discussed in more depth already with the waterbodies. This technique is kind of a counterpart to conventional rendering like in Mapnik – it works very well for those tasks that are prohibitively hard or impossible with Mapnik, though it does not perform that well for things Mapnik is good at. The other nice thing is that the expensive part of the rendering process, producing the sample cache, is generic, i.e. independent of the actual map style with its colors and also independent of the zoom level. The two different landcover and water color schemes shown are produced from the same base rendering for all zoom levels. I have no code to publish here at the moment since the implementation is rather rudimentary, without any real interface to define the styling parameters. One technical aspect i should also mention is that since the landcover data is processed with Osmium a number of broken geometries are missing that you would normally have in a rendering from an osm2pgsql database.
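
Since there is no code to publish, here is at least a strongly simplified sketch of the basic supersampling idea – rasterize the landcover classes at several times the target resolution and average the colors when scaling down. Everything here (class grid, colors, factor) is made up for illustration and this is not the actual implementation:

    import numpy as np

    def render_supersampled(class_grid, colors, factor=4):
        """class_grid: 2d array of landcover class indices at `factor`
        times the target resolution; colors: RGB color per class.
        Returns the averaged RGB image at the target resolution."""
        high_res = colors[class_grid].astype(float)   # color per sample
        h, w, _ = high_res.shape
        # group factor x factor sample blocks and average them per target pixel
        blocks = high_res.reshape(h // factor, factor, w // factor, factor, 3)
        return blocks.mean(axis=(1, 3)).round().astype(np.uint8)

    # tiny made-up example: class 0 = plain land, class 1 = woodland
    colors = np.array([[242, 239, 233], [173, 209, 158]], dtype=np.uint8)
    class_grid = np.zeros((8, 8), dtype=int)
    class_grid[2:6, 2:6] = 1
    print(render_supersampled(class_grid, colors).shape)   # (2, 2, 3)

The point about the generic sample cache shows up here as well: the class samples can be reused unchanged with a different color table or a different reduction factor.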

Regarding the demo map – ultimately this is certainly not a great map for most applications by any measure, since it – like the OSM standard style it is based on – tries to do too many things at once. But it is meant to demonstrate approaches 4 and 5 in the list above in a solid quality implementation. My own opinion is that it beats approaches 1 and 2 hands down, especially if you view this in terms of mapper feedback and geographic neutrality, but that is for the readers to decide for themselves. It certainly beats any trickery trying to implement 4 or 5 using Mapnik.

To view the demo click on any of the examples shown above and it will take you to the map with the layer configuration shown in the example. Because the map is composed from several semi-transparent layers it will be significantly slower to load than other maps of course.

September 20, 2017
by chris
0 comments

Survey on organized editing in OpenStreetMap

The data working group of the OSMF has started a survey on the subject of organized editing in OpenStreetMap.

The survey is only the first step in the process of a possible regulation of organized editing activities and it will only provide a view of the opinions within the OSM community and not directly lead to decisions. But the idea of developing a policy on such activities is a pretty important step.

In general OpenStreetMap has very few formal rules on how contributors can contribute to the database. There is a set of overall editing principles described as how we map and good practice – but these are more a general constitution of the project than specific practical laws to be followed, and they are more principles for how entered data is supposed to look and less about how to enter the data. The only firm procedural rule that existed from the beginning was the requirement to not copy from other maps and to only use permissible sources for mapping.

This pretty anarchic framework works amazingly well, with mapping being a largely self organized activity. Data users of course frequently complain about the lack of consistency in the data but stricter procedural rules on the mapping process would not necessarily have much effect on that. The amazing thing is not only that it works, it also works in a way that is in principle more globally egalitarian and culturally unbiased than any framework of rules imposed from outside could be. Mappers from a European city in principle have exactly the same freedom to map anything they can verifiably observe on the ground using any tags they see fit as people from a small village in a remote corner of the world. And if they meet somewhere in terms of mapping (i.e. they map in the same area on the same features) they do so on an equal level. Of course there are still technological and linguistic barriers but at least there are no procedural rules that create additional discrimination.

This self organized framework however only works as long as mappers continue to work together based on the principles of self-organization. If mappers organize themselves outside the project and then map together in OpenStreetMap as an organized group, an individual mapper can no longer interact with such a group of mappers in the same way as with an individual, and self-organization breaks down. This is the reason why regulation of organized editing activities is considered necessary by many mappers and why such a regulation is now being investigated by the DWG.

I would encourage everyone who has participated in OpenStreetMap or is using OpenStreetMap data to participate in this survey.


September 6, 2017
by chris
0 comments

Once more on positional accuracy of satellite images

I have written about the problem of positional accuracy of satellite images before on several occasions. Working with images from the Arctic for the update of the Franz Josef Land map i again realized a few things i wanted to share here.

The common assumption with high resolution open data images is that positional accuracy is usually fine but not great. But the errors as specified by the image producers are usually 90 percent limits and therefore not really that meaningful. This is a complicated matter in general and there are a lot of different sources of error that come into play here. The satellite image producers (in this case the USGS and ESA) are working on improving the positional accuracy of their images but the focus is mostly on reducing the relative errors between different images of the same area (allowing for better processing of time series) and not so much on the absolute accuracy.

The most widely neglected aspect of positional accuracy is the quality of the relief data used for terrain correction. The reasoning is typically that

  • a certain elevation data error generally translates into a significantly smaller error in image position for small field of view, nadir looking satellites like Landsat or Sentinel-2 (see the rough numbers after this list).
  • the most relevant parts of the earth surface, i.e. densely populated urban areas, are in flat regions where relief data errors are not a big practical problem.
  • as long as relief data without big errors is available for about 90 percent of the earth’s land surface the rest does not really cause problems with the error statistics.
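
To put rough numbers on the first point – the swath widths and orbit altitudes are the nominal published values, the 100m elevation error is just an example figure:

    # horizontal position error at the swath edge of a nadir looking
    # satellite: roughly elevation_error * tan(view angle off nadir)
    def edge_position_error(swath_km, altitude_km, elevation_error_m):
        tan_view_angle = (swath_km / 2.0) / altitude_km   # off-nadir at the swath edge
        return elevation_error_m * tan_view_angle

    print("Landsat 8:  %.0f m" % edge_position_error(185, 705, 100))   # ~13 m
    print("Sentinel-2: %.0f m" % edge_position_error(290, 786, 100))   # ~18 m

So per 100m of elevation error the image position shifts at the swath edge by at most roughly 13m for Landsat 8 and 18m for Sentinel-2 – which also shows why Sentinel-2 with its wider field of view is somewhat more sensitive to relief data problems.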

But if you work in the parts of the world outside these 90 percent you can find some pretty noticeable errors. Sentinel-2 is a particularly good case study for this because (a) due to the relatively wide field of view it is fairly sensitive here and (b) it is known that ESA uses relief data from viewfinderpanoramas.org for high latitudes, which can be studied and analyzed directly.

Normally the largest errors occur in mountain areas but the errors that are easiest to measure and analyze are those at the coast, which occur when the relief data has a horizontal offset and the coast features elevation values significantly above sea level. Here two examples of fairly impressive relative differences in the coast position between image pairs, both on the order of 100m, in Sentinel-2 images. The first is from Bell Island in Franz Josef Land:

 

The second example is from Bennett Island in the De Long islands:

 

This second example is also telling because it indicates that ESA is using an older version of the relief data, not containing the more recent update which replaces the offset and low accuracy data for the De Long islands with the more precise data i introduced here:

 

You can see that the northwest coast is correct and does not change between the different images because it is at sea level even in the offset relief data used, while the south coast is wrong because of the non-zero elevation in the old relief data there (shown on the left in the second comparison).

The Bell Island image pair by the way is from images taken four days apart in September 2016; the Bennett Island images are from the same day (August 25 this year), one from Sentinel-2A and one from Sentinel-2B.