Imagico.de

blog

September 20, 2017
by chris
0 comments

Survey on organized editing in OpenStreetMap

The data working group of the OSMF has started a survey on the subject of organized editing in OpenStreetMap.

The survey is only the first step in the process of a possible regulation of organized editing activities and it will only provide a view of the opinions within the OSM community and not directly lead to decisions. But the idea of developing a policy on such activities is a pretty important step.

In general OpenStreetMap has very few formal rules on how contributors can contribute to the database. There is a set of overall editing principles described as how we map and good practice – but these are more a general constitution of the project than specific practical laws to be followed, and they are more principles for what entered data is supposed to look like than rules for how to enter it. The only firm procedural rule that existed from the beginning was the requirement not to copy from other maps and to only use permissible sources for mapping.

This pretty anarchic framework works amazingly well, with mapping being a largely self-organized activity. Data users of course frequently complain about the lack of consistency in the data but stricter procedural rules on the mapping process would not necessarily have much effect on that. The amazing thing is not only that it works, it also works in a way that is in principle more globally egalitarian and culturally unbiased than any framework of rules imposed from outside could be. Mappers from a European city in principle have exactly the same freedom to map anything they can verifiably observe on the ground using any tags they see fit as people from a small village in a remote corner of the world. And if they meet somewhere in terms of mapping (i.e. they map in the same area on the same features) they do so on an equal level. Of course there are still technological and linguistic barriers but at least there are no procedural rules that create additional discrimination.

This self-organized framework however only works as long as mappers continue to work together based on the principles of self-organization. If mappers organize themselves outside the project and then map together in OpenStreetMap as an organized group, an individual mapper can no longer interact with such a group of mappers in the same way as with an individual, and self-organization breaks down. This is the reason why regulation of organized editing activities is something that is considered necessary by many mappers and why such a regulation is now being investigated by the DWG.

I would encourage everyone who has participated in OpenStreetMap or is using OpenStreetMap data to participate in this survey.


September 6, 2017
by chris
0 comments

Once more on positional accuracy of satellite images

I have written about the problem of positional accuracy of satellite images on several occasions before. Working with images from the Arctic for the update of the Franz Josef Land map, i again realized a few things i wanted to share here.

The common assumption with high resolution open data images is that positional accuracy is usually fine but not great. But the errors as specified by the image producers are usually 90 percent limits and therefore not really that meaningful. This is a complicated matter in general and there are a lot of different sources of error that play a role here. The satellite image producers (in this case the USGS and ESA) are working on improving the positional accuracy of their images but the focus is mostly on reducing the relative errors between different images of the same area (allowing for better processing of time series) and not so much on the absolute accuracy.
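
To illustrate why such a 90 percent limit alone says little, consider a simple circular Gaussian error model (a rough sketch with an illustrative 12 m CE90 figure, not the specification of any actual product): one in ten points may still be off by more than the stated limit, and the limit says nothing about how much more.

    import numpy as np

    rng = np.random.default_rng(42)
    sigma = 12.0 / 2.146                 # for a circular Gaussian, CE90 is about 2.146 * sigma
    dx = rng.normal(0.0, sigma, 100_000)
    dy = rng.normal(0.0, sigma, 100_000)
    r = np.hypot(dx, dy)                 # radial positional error of each simulated point

    print("90th percentile:", round(np.percentile(r, 90), 1), "m")   # ~12 m by construction
    print("99th percentile:", round(np.percentile(r, 99), 1), "m")   # ~17 m
    print("largest error in sample:", round(r.max(), 1), "m")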

The most widely neglected aspect of positional accuracy is the quality of the relief data used for terrain correction. The reasoning is typically that

  • a certain elevation data error generally translates into a significantly smaller error in image position for small field of view nadir looking satellites like Landsat or Sentinel-2.
  • the most relevant parts of the earth surface, i.e. densely populated urban areas, are in flat regions where relief data errors are not a big practical problem.
  • as long as relief data without big errors is available for about 90 percent of the earth land surface the rest does not really cause problems with the error statistics.

But if you work in parts of the world outside these 90 percent you can find some pretty noticeable errors. Sentinel-2 is in particular a good study case for this because (a) due to the relatively wide field of view it is fairly sensitive here and (b) it is known that ESA uses relief data from viewfinderpanoramas.org for high latitudes which can be studied and analyzed directly.
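
To get a feeling for the numbers: for a near-nadir looking pushbroom sensor the horizontal displacement caused by an elevation error in the terrain correction is roughly the elevation error times the tangent of the off-nadir viewing angle. A minimal sketch of this rule of thumb (the viewing angles are rough illustrative values derived from swath width and orbit altitude, not official figures):

    import math

    def horizontal_offset(elevation_error_m, off_nadir_deg):
        # horizontal image displacement caused by an error in the elevation
        # data used for terrain correction, at a given off-nadir viewing angle
        return elevation_error_m * math.tan(math.radians(off_nadir_deg))

    # roughly: Landsat looks at most ~7.5 degrees off nadir at the swath edge,
    # Sentinel-2 with its wider field of view up to ~10.5 degrees
    for angle in (7.5, 10.5):
        print(f"{angle:4.1f} deg off nadir, 300 m DEM error: "
              f"{horizontal_offset(300.0, angle):5.1f} m horizontal offset")

A relief data error of several hundred meters near the swath edge can thus easily produce offsets of the magnitude shown in the examples below.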

Normally the largest errors occur in mountain areas but the errors that are easiest to measure and analyze are those at the coast, which occur when the relief data has a horizontal offset and the coast features elevation values significantly above sea level. Here are two examples of fairly impressive relative differences in coast position between image pairs, both on the order of 100 m, in Sentinel-2 images. The first is from Bell Island in Franz Josef Land:

 

The second example is from Bennett Island in the De Long islands:

 

This second example is also telling because it indicates ESA is using an older version of the relief data, one not containing the more recent update i introduced here, which replaces the offset and low accuracy data for the De Long islands with more precise data:

 

You can see that the northwest coast is correct and does not change between the different images because it is at sea level even in the offset relief data used, while the south coast is wrong because of the non-zero elevation in the old relief data there (shown on the left in the second comparison).

The Bell Island image pair by the way is from four days apart in September 2016; the Bennett Island images are from the same day (August 25 this year) from Sentinel-2A and Sentinel-2B.


September 2, 2017
by chris
0 comments

Franz Josef Land map update

I have updated the Franz Josef Land map with new data based mostly on 2016 Sentinel-2 imagery, also using a relief rendering primarily based on ArcticDEM data as a test case for the new data set.

Franz Josef Land is relatively well covered in ArcticDEM with only a relatively low fraction of data gaps, but the data still contains a lot of artefacts that would severely degrade any map rendering based on it directly. The artefact detection is only semi-automated; identifying data errors through analysis of the data alone is not completely reliable and requires manual review for good results. In addition to masking gaps and artefacts and filling those areas with a combination of other data sources and shape-from-shading, i also compensated for the elevation calibration problems described in my general report on ArcticDEM and flattened the water areas.
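
The masking and filling step itself is conceptually simple once an artefact mask exists; the hard part is producing a reliable mask. A minimal sketch of the fill step with numpy (the array names and the idea of a single coarser fallback grid are assumptions for illustration, not the actual processing chain used here):

    import numpy as np

    def fill_dem(arctic_dem, fallback_dem, artefact_mask, water_mask, water_level=0.0):
        # arctic_dem, fallback_dem: elevation grids on the same raster, NaN = no data
        # artefact_mask, water_mask: boolean grids flagging bad cells and water areas
        dem = arctic_dem.copy()
        bad = artefact_mask | np.isnan(dem)   # treat data gaps like detected artefacts
        dem[bad] = fallback_dem[bad]          # fill from the coarser fallback data
        dem[water_mask] = water_level         # flatten water areas to a fixed level
        return dem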

ArcticDEM original data with artefacts

ArcticDEM processed

Since the map production process itself, with label placement and generalization, is fully automated, updating the data basis and replacing the relief data source is fairly simple, which is extremely useful, in particular for a map of an area that changes as fast as this one.

The results can be found in the Franz Josef Land map.

If you would like to create custom maps based on this data or would like to use the data for other purposes you can also find it on services.imagico.de.


September 1, 2017
by chris
0 comments

ArcticDEM elevation data set review

Some readers have already been waiting for this – i have now completed my initial review of the ArcticDEM elevation data set which is being produced and published in several stages during this year by the University of Minnesota and the US NGA.

As is frequently the case with elevation data sources, the impression you might get from what is advertised about the data might lead to disappointment once you look into practically using the files, but this is still an interesting and valuable source of data. To find out about the possibilities and limitations of the data and what problems you might encounter using it, read the full review.

I will more specifically cover a practical use case for the ArcticDEM data in the next post.


August 24, 2017
by chris
0 comments

OpenStreetMap-Carto – a look into the future

This is a continuation of my previous post where i looked back at the developments during the last year of OpenStreetMap-Carto, the map style featured on openstreetmap.org. In the first part i looked at recent changes in the project and here i want to try looking a bit into what the future might bring.

So what do all the recent developments mean for the future of OSM-Carto? I don’t really know. What concrete changes are made depends on what developers are interested in, which is hard to predict. But it is likely that the changes in procedure i discussed in the first part are going to influence the incentives and motivations for contributions. What i think i can observe, when comparing more recent changes with earlier changes from the last years from an esthetic point of view, is a move towards a more naïve-art-like direction. This is a very interesting outlook which you could consider to kind of match the way mapping is performed in OpenStreetMap. However mapping in OSM – while not usually based on specialized cartographic or geo-science knowledge and education – in most cases shows a high degree of sophistication and knowledge and cannot really be considered a cartographic equivalent of naïve art.

Should OSM-Carto indeed move further in a naïve-art-like direction it will certainly struggle with the high degree of complexity and sophistication both of its own legacy and of the OpenStreetMap data itself. How this can work out is an interesting question. On a more technical level this also relates to a problem i pointed out about a year ago.

In the past OSM-Carto has often been a vanguard with respect to design in interactive digital maps. Paul Norman recently pointed out several examples of this in his SotM talk on OSM-Carto (second half of the video), in particular polygon-size-based label scaling, systematic and automated color selection based on perceptual color spaces and multilingual labels. From my own work i could add the introduction of program-generated randomized periodic patterns, which were essentially non-existent in digital cartography before being introduced in OSM-Carto. I predict this kind of innovation will be seen less in the future because development is becoming more focused on localized changes with a relatively low level of innovation and coordination, combined with trial-and-error changes like (metaphorically) moving stuff around to test if it looks better that way without a deeper understanding of why certain things work and others don’t. This could also lead to the map's focus moving more towards the mainstream of OSM based maps as offered by many commercial providers. Whether the project can continue to attract contributors with the background and vision to develop innovative solutions to the big problems and provide them a supportive environment remains to be seen.

The most critical point i see in the future development of OSM-Carto is whether it can still positively fulfill its function as a feedback tool for mappers, encouraging correct and accurate mapping. Anticipating how mappers will react to their data being rendered in a certain way in the map is one of the most difficult tasks for an OSM-Carto developer and getting this wrong can in the long term produce a lot of damage. No matter in what direction the style steers design-wise, this is the thing to keep a close eye on. Although useful and constructive feedback for mappers is now one of the documented goals of the style, there is no policy or procedural mechanism that ensures changes do not interfere with this.

My recommendations for those who care about the public face of OSM and OSM-Carto are:

  • As a mapper, resist the urge to adjust your mapping work for its use in the map – either directly as mapping for the renderer or indirectly by mapping or tagging things in a way you believe will make it easier for the style developers to produce a good looking map, as i wrote about recently. Instead base your mapping on verifiable observations on the ground. If the way OSM-Carto renders certain things you map looks strange or seems to suggest mapping things differently, also take a look at other map styles and how they render this.
  • Contribute to style development by making pull requests with actual changes. As a non-maintainer you cannot directly make changes but you now only need to convince one maintainer of your change to get it into the map.
  • Hold your maintainers accountable for their work. As explained, the style maintainers now have more autonomy in making decisions but this also means responsibility for the changes made. This only works though if map users also provide feedback on changes made. Keep in mind however that just stating you do not like a certain change is not very helpful or convincing. If you see problems with a change in the style that are not a clear technical bug it is usually best to try explaining the problem in light of the documented goals of the style.
  • Develop alternative map styles. From my perspective this is the most important point of all. If OSM-Carto is without alternatives there is much less incentive to substantially improve it than if there are other styles with similar goals but different design approaches. While there are of course countless general purpose OSM based map styles (only a few of them under an open license of course) and also a number of derivatives of OSM-Carto from local communities like the French and German OSM styles, most of these concentrate on goals that are fundamentally different from those of OSM-Carto. There are also some style variants being developed as personal projects by OSM community members – like here. What i would really like to see in the next few years is at least a handful of independent map styles being offered on openstreetmap.org for display, each being an honest attempt at providing a good map to the OSM community, even if with less manpower than OSM-Carto. Considering how much maps influence the image of OpenStreetMap both from the outside and from the inside through the mappers, i think such a development would be most productive and enabling for the OSM community.

August 22, 2017
by chris
0 comments

OpenStreetMap-Carto – a look back at the last year

About nine months ago i became co-maintainer of OpenStreetMap-Carto, the map style featured on openstreetmap.org and in many ways the public face of the OpenStreetMap project.

During this time there have been a number of fairly big changes in the project but most of them are actually not that visible to the map user. I here want to take a look at these changes, the state of the project and its past and future, as well as looking back at my own personal goals during the last year and what i could and could not accomplish.

I made a lot more contributions to the project before becoming a maintainer than afterwards – which makes sense in light of my view of the role of a maintainer as more that of someone advising and supervising than of someone actually making changes. My primary goal when becoming a maintainer was to develop and establish a clearer set of goals and guidelines. Historically there have been next to no documented cartographic guidelines in OpenStreetMap-Carto and as a new contributor most people will struggle to understand when a change to the map is considered desirable and when it is not.

What i managed to do is establish a set of purposes and goals for the style. This is fairly important because what the purpose of a map is is by no means trivial, especially for a map with such a large number of uses as OSM-Carto. What i was not able to do however is establish a more specific and more practical set of styling guidelines meant to support contributors in practical decisions. My suggestions for such guidelines did not find general approval and no alternatives were suggested, so we ultimately could not agree on something specific here.

This brings me to one of the big organizational changes in OSM-Carto in the last year. With the appointment of three additional maintainers, and the group of maintainers therefore becoming both larger and much less homogeneous in terms of backgrounds, interests and viewpoints, the aim of making all major decisions based on consensus proved increasingly difficult. The decision was made to depart from strictly consensus-based decision making and allow each maintainer more autonomy.

Essentially this means each maintainer can make changes and merge changes of non-maintainers even if there are objections from other maintainers. This change in procedure prevents stalling of changes where no consensus can be reached (which happened quite frequently before) but it also significantly reduces the pressure on maintainers to develop and maintain a common strategy and a common vision of the overall direction of the style.

This kind of makes sense in a project that is part of OpenStreetMap which is largely built on do-ocratic principles. But only time can tell if this is actually going to work for a map design project. The whole thing has a lot to do with how methods of cooperation scale and maintain quality. OSM-Carto is a very large project as far as map styles are concerned. At the same time design work is inherently hard to compartmentalize because of the strong inter-dependencies in the visual results. In my eyes a project of this size and complexity can only work in one of the following ways:

  1. There is one central authority that ultimately makes all important decisions. This was the case with OSM-Carto a few years back when Andy was the only maintainer.
  2. Those in positions of making decisions work together not only towards a common goal but also with a common vision how to accomplish this goal. This requires a high degree of compatibility and mutuality between these people.
  3. There is a set of fundamental guiding principles that applies to all work and that is ultimately enforced by the project to ensure a minimum level of homogeneity and consistency of the results and a reliable work environment for contributors. This is for example the case for mapping in OpenStreetMap. The guiding principles are essentially what is described on How We Map and Good practice. And these are ultimately enforced by the DWG in case the community cannot solve issues on their own.

The thing is that the first and second way have problems scaling if the project becomes very large; this is why mapping in OSM, with hundreds of thousands of regular contributors, does not work this way and why OSM-Carto has departed from the consensus principle during the last year. For a map style project like OSM-Carto it would still be possible to use these approaches but this would require a fairly stringent hierarchy in the project, with different people having specific roles and tasks, and this might not really be workable for a community project. Large commercial design projects (think of architecture, industrial design and fashion for example) generally use the first or second approach, although many of them in addition also have extensive documented design guidelines. The question no one can answer so far is if the third way can work in the long term for a cartographic design project like a map style, especially if there are no clear practical rules for design decisions everyone adheres to and by which the quality of changes and of the resulting map is measured.

One principal problem with do-ocracies – even more than with other forms of governance – is that they are constantly at risk of degrading into an oligarchy, with those in a position of power (because they do things – hence do-ocracy) caring more for their own ability to continue doing things than for the project being open and inclusive and fulfilling its purposes. There is no inherent mechanism in a do-ocracy that makes the people involved serve or even care for the common good or that rewards such actions. In the case of OSM as a whole, and as a mapping project in particular, the most important regulating factor is that no one can map and keep up-to-date the whole world or even just a city on their own. Working together with other mappers – and not just a handful of them with similar views but a whole lot of them from all over the world – is an inherent necessity of mapping in OSM, so do-ocracy works in a mostly self-regulating fashion here. But designing a map style, even a complex one like OSM-Carto, is different. It does not require a lot of people and it will likely run more smoothly on an operational level if there are fewer people involved in decision making. Of course in the past OSM-Carto has essentially already been an aristocratic/meritocratic system, which is also a form of oligarchy.

The other, more technical big change that has happened in OSM-Carto during the last year is the database reload and the move to using an hstore database, which finally lifts the restrictions on what tags can be used in the style and which also got rid of old-style tagging of multipolygon relations. This change had essentially been in the works for several years already and i did not contribute much to it. While not having much visual impact on its own it is an important basis for future improvements.

In a similar way the earlier move (from the end of 2016) to Mapnik 3 and to newer CartoCSS versions allowed using some new features which were not usable before. Still, design-wise the style is largely limited by the rather conservative feature set of Mapnik and CartoCSS, which makes a lot of things that would be nice to have awfully complicated.

So much for the look back at the developments of the past year. In the next post i am going to try looking into the future of OSM-Carto a bit.


August 22, 2017
by chris
0 comments

Greenland in the evening

Thanks to pretty good weather in large parts of Greenland during July and early August there is some good satellite image material available for Greenland from this year. I took this opportunity to assemble two evening images, both of them based primarily on Landsat data from July which means this is not yet snow and ice minimum. The first is from the northeast:

This extends across about 12 degrees of latitude and illustrates well the character of nighttime images from satellites in sun-synchronous orbit. In the very north these are quite similar to the normal daytime images, in fact both converge towards the northern recording limit, while further south the images change more towards the later evening. To illustrate this, here is an arbitrary orbit of Landsat (path 24 and 40 if you want to know) with the row numbers indicated.

Row 246 is the top row between the descending (daytime) part and the ascending (nighttime) part. Here are some approximate average sun azimuth directions from scenes with these rows:

row average sun azimuth
241 -74
242 -80
243 -87
244 -95
245 -105
246 -116
247 -125
248 -135
1 -144
2 -151
3 -158

As you can see, as the satellite transits from night to day the average sun position moves from northwest (i.e. late evening) via west towards south. The south direction is reached at row 9 (which is at about 72° north) and then the direction moves further to the southeast (i.e. morning), which is the typical viewing configuration for lower latitudes.

Here are a few crops from different parts of the image.

One of the nice aspects of evening images from the northern hemisphere – apart from the low sun position producing a more dramatic rendering of the relief – is that the sun direction better matches the light direction we are used to for shaded relief, so this kind of image is less prone to perceived relief inversion when viewed by people not used to the morning illumination more common in satellite images.
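
For those producing shaded relief themselves the effect is easy to experiment with, since the illumination azimuth is just a parameter of the hillshading. A minimal numpy sketch, assuming an elevation grid with square cells and rows running north to south (names and conventions are assumptions):

    import numpy as np

    def hillshade(dem, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
        # surface normal from the elevation gradient; rows increase southwards
        dz_dy, dz_dx = np.gradient(dem, cellsize)        # d/drow, d/dcolumn
        nx, ny, nz = -dz_dx, dz_dy, np.ones_like(dem)    # normal in (east, north, up)
        norm = np.sqrt(nx * nx + ny * ny + nz * nz)
        # unit vector pointing towards the light source (compass azimuth, clockwise from north)
        az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
        lx, ly, lz = np.sin(az) * np.cos(alt), np.cos(az) * np.cos(alt), np.sin(alt)
        return np.clip((nx * lx + ny * ly + nz * lz) / norm, 0.0, 1.0)

With the conventional azimuth of 315 degrees (northwest, roughly matching evening illumination at high latitudes) relief is usually perceived correctly, while rendering the same grid with a southeast azimuth tends to produce the inverted impression described above.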

The second view is from western Greenland from Disko Bay near Ilulissat with the Jakobshavn Glacier. This also uses a few images from 2016.

Both images can be found in the catalog on services.imagico.de.

August 12, 2017
by chris
7 Comments

Social Engineering in OpenStreetMap

With the use, popularity and economic value of OpenStreetMap increasing significantly over the last years, interest in and attempts at influencing the direction of the project and its participants have also increased a lot. I here want to look a bit at how this works, sometimes in unexpected ways.

I use the term social engineering here in the sense of activities that aim to make people do (or not do) certain things not by educating them and enabling them to make better decisions according to their own priorities but by influencing their perception to serve certain goals without them being aware of it. Some might consider this to be defensible or even desirable if it serves an ulterior motive but i would here take a strictly humanistic viewpoint.

Note social engineering does not necessarily require those who actually influence others to be aware of the reasons.

OpenStreetMap is well known among digital social projects and internet communities for having relatively few firm rules and for giving its members a lot of freedom in how to work within the project. This also provides a lot of room for social engineering of course. On the other hand the OpenStreetMap community is fairly diverse, at least in some aspects, and quite connected, so it is rather difficult to target a specific group of people without others also becoming aware of the activities. This means classical covert social engineering, where people are not aware they are being engineered, is not that dominant.

But there are a lot of activities in OpenStreetMap that can be considered more or less open social engineering attempting to influence or organize mapping activities. Humanitarian mapping is one of the most iconic examples for this and there are also quite a number of widely used tools like Maproulette that can be used to support such activities.

The number of people mapping in OpenStreetMap on a regular basis makes influencing them to focus on mapping certain things fairly attractive even on a relatively small scale. But this is relatively harmless because

  • it is fairly direct,
  • the influence and often even the interests behind it can usually be readily seen by the mapper,
  • if it goes over the top such activity can quite easily be shut down or moderated by the community.

In other words: Mappers engaging in a HOT project are not that deeply manipulated because the reasons they believe they participate for are not fundamentally different from the actual reasons. They might not know exactly how much the people in the area they map in profit from what they do and what economic interests the organization planning the mapping activity has exactly, but in broad strokes they still make an informed decision to participate.

Nonetheless such activities are not without problems in OpenStreetMap, especially since they can affect the social balance in the project. Local mappers mapping their environment, for example, can often feel bullied or patronized if organized mapping activities from abroad start in their area.

This is however not primarily what i want to discuss here. I want to focus more on a more subtle form of social engineering i would call social self engineering. A good example to show how this works in OpenStreetMap is what we call mapping for the renderer.

Mapping for the renderer in its simplest form occurs when people enter data in the OSM database not because it represents an observation on the ground but to achieve a certain result in a map. Examples include

  • strings being entered into name tags of features that are not names in an attempt to place labels.
  • place nodes being moved so a label appears at a more appealing position.
  • classifications of places or roads being inflated to make them appear earlier or more prominently in maps.
  • tags being omitted from features because their appearance in the map is considered ugly.

Compared to normal social engineering the roles are kind of reversed here. The one whose behavior is changed is the one who actually makes the decision (therefore self engineering) and the influence to do that comes from someone (the designer of the map) who is often not even aware that this might happen and is usually not really happy about this being the case.

This simple form of mapping for the renderer is widespread and those who do this – while they usually know they are doing something that is not quite right – are usually not fully aware of why they are motivated to do so and what consequences this has in terms of data quality. In most cases they simply consider it a kind of shortcut or procedural cheating. The specific problems of the whole field of interaction between map designers and mappers are by the way something i have discussed in more depth before.

There is another variant of mapping for the renderer (or more generally: mapping with specific consideration for a data user) that is less direct and that i would call preemptive mapping for the renderer. A good example of this is the popular is_in tag (and variants of it like is_in:country) which indicates the country (or other entity) that, for example, a certain town or other place is located in. I am not aware of any application that depends on this tag to work properly. Taginfo lists Nominatim as the only application actually using it. The very idea that it makes sense in a spatial database to manually tag the fact that a certain geometry is located within another geometry is preposterous. Still there are more than 2 million objects with this tag in the OSM database.

Why this happens has a lot to do with cargo cult. In fact quite a lot of tagging ideas and proposals developed and communicated in OSM can largely be classified as cargo cult based and this is one of the reasons why many mappers look down on tagging discussions. The very idea that any desire to document an observation on the ground in OSM needs to go through some universal classification system is inherently prone to wishful thinking. Sometimes a sophisticated structured tagging system is developed to make it attractive for developers to implement, which luckily often ensures it is used neither by mappers nor by data users. The idea of an importance tag that re-surfaces every few months somewhere falls into the same category. Out of the desire to have an objective and universal measure of importance for things, people invent an importance tag and hope the mere existence of this tag will actually produce such a measure.

But not all such mapping ideas are unsuccessful. We also have quite a few tags that were invented because someone thought it would make it easier for data users to have this information and where mappers keep investing a lot of time to actually tag it – like the mentioned is_in. Or the idea to map things as polygons that could just as accurately be mapped as linear geometries or nodes – like here.

The problem with this is not only the waste of mapping resources, it also sometimes encourages data users not to invest in interpreting more sensible mapping methods. Preemptive mapping for the renderer – even if based on considerations that make some sense – always aims for technologically conservative data interpretation. This way it hampers actual innovation and investment in intelligent and sophisticated interpretation of mapper-centered tagging and mapping methods. The is_in tag for example was invented back in the early days of OpenStreetMap when there were no boundary relations that could be used to automatically check where a place is located. So instead of inventing such a better suited solution to the problem someone took the technologically simple route and put the burden of this on the mapper. Luckily in this case this did not prevent the better solution of boundary relations and algorithmic point-in-polygon tests from being developed and established.
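
To illustrate why manually tagged containment is redundant once boundary polygons exist: determining which polygon a place node lies in is a standard geometric test. A minimal ray casting sketch (purely illustrative, not how Nominatim or any other specific tool implements this, and ignoring subtleties like points exactly on an edge or polygons crossing the antimeridian):

    def point_in_polygon(lon, lat, polygon):
        # ray casting test: polygon is a list of (lon, lat) vertex pairs
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # does this edge cross the horizontal ray going east from (lon, lat)?
            if (y1 > lat) != (y2 > lat):
                x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > lon:
                    inside = not inside
        return inside

    # toy example: a square "country" around the origin
    square = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
    print(point_in_polygon(0.2, 0.3, square))   # True
    print(point_in_polygon(2.0, 0.3, square))   # False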

And while attempts from data users to directly influence mappers to create data that is easy for them to interpret are often quite easily spotted and rebutted, the preemptive variant from the side of the mapper is in practice often less obvious. And the motives why a mapper uses or supports a certain problematic tagging are also often complicated and unclear.

So if – as a mapper – you want to really support and encourage competent data use, you should better ignore any assumed interests of data users and map in the way you as a mapper can most efficiently represent your observations on the ground in data form.


July 28, 2017
by chris
0 comments

Perceived and actual quality in image sources for OpenStreetMap

One of the pitfalls when OpenStreetMap contributors map remotely based on aerial or satellite images without recent on-the-ground knowledge is that it sometimes leads to “improving” seemingly inaccurate data based on outdated information. The most iconic examples for this are cases where buildings have been demolished but remain visible in popular image sources and mappers keep recreating these buildings because they seem to be missing in the OpenStreetMap database.

The whole problem has increased in recent years because the number of different image sources being used as a data source for mapping has increased a lot and at the same time the average age of image sources tends to increase because images are not always updated at regular intervals. Quality of imagery is ultimately a multi-dimensional measure but to mappers, especially if they are unfamiliar with the area they look at, image resolution is the most observable dimension of quality, and mappers are often inclined to consider the highest resolution image source the best one. The most important dimensions of image quality probably are:

  • spatial resolution
  • positional accuracy – which is not necessarily correlated with resolution – see my post on the subject.
  • age – lack of consideration for this is the cause of the problem described above.
  • seasonality – images from different seasons often have different suitability for mapping different things. A frequent problem of high resolution images from Bing and DigitalGlobe at higher latitudes for example is that images are often from early in the year (when weather is frequently better than later in the year) and most things you’d want to map are hidden by snow.
  • cloud freeness

Human-made infrastructure is not the only case where the combination of these different dimensions of quality for a certain mapping task means the highest resolution image source is clearly not the best. I recently came across a great example to illustrate this.

Background story: Back in 2013 i mapped the glaciers of New Guinea in OpenStreetMap based on then new Landsat imagery. Bing did not offer any useful imagery in the area back then and still does not.

With the recently released DigitalGlobe image layers there is now high resolution coverage of this area but the images are not very new, likely from around 2011-2012. More importantly though, the Northwall Firn area has a significant offset. This leads to the new mapping by palimpadum, made in good faith, actually being less accurate than before.

I have put up two new images on the OSM images for mapping now which can help in actually updating and improving the glaciers and possibly other things in the area. These images also well illustrate the other dimensions of image quality for mapping.

The newer of the two images is from 2016, by Sentinel-2. It is the most recent cloud free open data image of the area currently available. But it has a serious flaw: it shows fresh snow on the highest areas of the mountains, making accurate mapping of the glaciers based on it impossible.

The other image is from 2015, by Landsat 8, and shows the glaciers free of fresh snow so you can see their extent well. So overall we have:

  • The DigitalGlobe image which offers the highest resolution but is the least up-to-date and has at least in small parts a fairly bad positional accuracy. Also large parts are not usable due to clouds.
  • The Sentinel-2 image which is the most up-to-date but is impaired by fresh snow.
  • The Landsat 8 image which is slightly lower resolution than that of Sentinel-2 but offers a snow free view of the glaciers.

Positional accuracy of the Sentinel-2 and Landsat images is very similar in the area by the way. And finally clouds are also the reason why we have no newer images from either Landsat or Sentinel-2 that offer a clear view of the area.

For mapping the glaciers you would of course want to use the Landsat image here. For mapping the mine, which tends to change quite rapidly, the Sentinel-2 image is probably best. Other things like lakes or landcover can be safely mapped from the DigitalGlobe data although it is a good idea to check and possibly correct the image alignment using the Landsat and Sentinel-2 data as reference.
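
One simple way to check the alignment of two image sources covering the same area is phase correlation, which yields the dominant pixel offset between two roughly co-registered images. A minimal numpy sketch, assuming two equally sized single band arrays already warped to the same grid (names are assumptions, and real alignment checks would also need sub-pixel refinement and outlier handling):

    import numpy as np

    def estimate_shift(reference, image):
        # estimate the dominant integer pixel offset of `image` relative to
        # `reference` via phase correlation; both arrays must have the same shape
        cross_power = np.conj(np.fft.fft2(reference)) * np.fft.fft2(image)
        cross_power /= np.abs(cross_power) + 1e-12           # keep only the phase
        correlation = np.abs(np.fft.ifft2(cross_power))
        peak = np.unravel_index(np.argmax(correlation), correlation.shape)
        # peak positions beyond half the image size correspond to negative shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))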

Even to those who are not really interested in glacier mapping i would recommend loading the three image sources in your favorite OSM editor and switching between the different images a bit to get a feeling for the differences described. Here are the URLs for the OSMIM layers:

tms:http://imagico.de/map/osmim_tiles.php?layer=S2A_R088_S05_20160812T011732&z={zoom}&x={x}&y={-y}
tms:http://imagico.de/map/osmim_tiles.php?layer=LC81030632015286LGN00&z={zoom}&x={x}&y={-y}


July 23, 2017
by chris
4 Comments

Seeing in the dark

As many of you have probably already read elsewhere, there has been a large iceberg calving event in the Antarctic recently at the Larsen C ice shelf on the east side of the Antarctic Peninsula. The interesting thing about this is much less the event itself (yes, it is a large iceberg and yes, it probably is part of a general decline of ice shelves at the Antarctic Peninsula over the last century or so but no, this is not in any way record breaking or particularly alarming if you look at things with a time horizon of more than a decade). The interesting thing is more the style of communication about it. Icebergs of this size and larger occur in the Antarctic at intervals of several years to a few decades. It is known that many of the large ice shelf plates show a cyclic growth pattern interrupted by the calving of large icebergs like this one, sometimes on a time scale of more than fifty years. This is the first such event that was closely tracked remotely. In the past people usually noticed such events a few days or weeks after they happened, while in this case there was a lot of anticipation with the development of cracks in the ice, and for months we had frequent predictions of the calving being expected any day now – or in other words: a lot of people apparently were quite wrong in their assumptions on how exactly such a calving would progress.

I am not really that familiar with the dynamics of ice myself so i won’t analyze this in detail. The important thing to keep in mind is that ice at the scale of tens to hundreds of kilometers and under the pressures involved here behaves very differently from how you would expect it to behave in analogy to ice on a lake or a river you might observe up close.

The other interesting thing about this iceberg calving is that it happened in the middle of the polar night. Since the Larsen C ice shelf is located south of the Antarctic Circle this means permanent darkness during this time. So how do you observe an event in the dark via open data satellite images?

One of the most interesting possibilities for nighttime observation is the Day/Night Band of the VIIRS instrument. This is a visual/NIR range sensor capable of recording very low light levels. This is best known from the nighttime city lights visualizations you can find – which feature artificial colors.

This sensor is capable of recording images illuminated by moonlight or other light sources like the aurora. So we got some of the earliest images of the free floating iceberg from VIIRS.

Antarctic in Moonlight – VIIRS Day/Night Band

This image uses a logarithmic scale; the actual radiance varies across the shown area by about three orders of magnitude.
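
For those who want to reproduce this kind of rendering, mapping radiances spanning several orders of magnitude onto display values is essentially a logarithmic stretch. A minimal numpy sketch (the radiance range is a placeholder, not a calibrated figure for this scene):

    import numpy as np

    def log_stretch(radiance, lo, hi):
        # map radiances in [lo, hi] to 0..255 on a logarithmic scale
        r = np.clip(radiance, lo, hi)
        scaled = (np.log10(r) - np.log10(lo)) / (np.log10(hi) - np.log10(lo))
        return np.round(scaled * 255).astype(np.uint8)

    # example: three orders of magnitude between the darkest and brightest areas
    sample = np.array([0.001, 0.01, 0.1, 1.0])
    print(log_stretch(sample, lo=0.001, hi=1.0))   # [  0  85 170 255]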

Another much older and better established way for nighttime observation is the thermal infrared. In the long wavelength infrared the earth surface emits light based on its temperature and this emission is nearly the same during the day and night. Clouds emit in the long wavelength infrared as well, which makes thermal infrared imaging an attractive and widely used option for 24h weather observations by weather satellites.

Thermal infrared data is available from the previously mentioned VIIRS instrument but also from MODIS and Sentinel-3 SLSTR. Here is an example of the area from SLSTR.

Glowing in the dark – thermal infrared emissions recorded by Sentinel-3 SLSTR

During the polar night the highest temperature occurs at open water surfaces like on the west side of the Antarctic peninsula on the upper left and the lowest temperatures in this area occur on the ice shelf surfaces and the low elevation parts of the glaciers on the east side of the Antarctic peninsula. The outline of the new iceberg is well visible because of the relatively warm open water and thin ice in the gap between the iceberg and the ice shelf.

Thermal infrared images are also available at significantly higher resolution from Landsat and ASTER. Landsat is not regularly recording nighttime images in the Antarctic but images have been recorded recently in the area because of the special interest in the event. Here is an assembly of images from July 12 and July 14.

High resolution thermal infrared by Landsat 8

You can see there is a significant amount of clouds, especially in the scenes on the right side, obscuring the ice features beneath them. Here is a magnified crop showing the level of detail available.

ASTER so far does not offer any recent images of the area. In principle ASTER currently provides the highest resolution thermal imaging capability – both in the open data domain and in the world of publicly available systems in general – though there are probably classified higher resolution thermal infrared sensors in operation.

So far everything shown has been from passive sensors recording natural reflections and emissions. The other possibility for nighttime observations is through active sensors, in particular radar. This has the added advantage that it is independent of clouds (which are essentially transparent at the wavelengths used).

Radar however is not inherently an image recording system. Take the classical scope of a navigational radar system for example – the two-dimensional image is built by plotting the runtime of the reflected signal received from the different directions. Essentially the same applies to radar data from satellites.

Satellite based radar systems that produce open data products currently exist on the Sentinel-3 and Sentinel-1 satellites. Sentinel-3 features a radar altimeter capable of measuring the surface elevation in a small strip directly below the satellite. This is not really of interest for the purpose of observing an iceberg calving.

Sentinel-1 on the other hand features a classical imaging radar system. The most widely used data product from Sentinel-1 is the Ground Range Detected data which is commonly visualized either in form of a grayscale image or a false color image in case several polarizations are combined. Here is an example of the area under discussion.

Looking through clouds – imaging radar from Sentinel-1

Note that while this looks like an image viewed from above, it is not. The two-dimensional image is created by interpreting run time as distance (which is a quite accurate assumption) and then mapping the distance to a point on an ellipsoid model of the earth (which is an extreme simplification). In other words: the image shows the radar reflection strength at the position where it would have come from if the earth surface were a perfect ellipsoid. For the ice shelf this is not such a big deal since it rises no more than maybe a hundred meters above the water level, which is very close to the ellipsoid, but you should never try to derive any positional information directly from such an image on land.
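
The size of the positional error this causes is easy to estimate: a target elevated by h above the ellipsoid appears shifted towards the sensor by roughly h divided by the tangent of the incidence angle. A small sketch of this estimate (the incidence angles are illustrative values roughly in the range of Sentinel-1 IW recordings):

    import math

    def ground_range_shift(height_m, incidence_deg):
        # approximate horizontal displacement (towards the sensor) of a target
        # at height_m above the ellipsoid when the image is projected onto it
        return height_m / math.tan(math.radians(incidence_deg))

    for h in (100.0, 2000.0):            # ice shelf surface vs. mountainous terrain
        for inc in (30.0, 40.0):         # illustrative incidence angles
            print(f"h = {h:6.0f} m, incidence {inc:.0f} deg: "
                  f"shift of roughly {ground_range_shift(h, inc):5.0f} m")

So a hundred meters of ice shelf elevation translates into a shift on the order of a hundred meters, while mountainous terrain can easily be displaced by kilometers.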

You can also see that noise levels in the radar data tend to be much higher than in optical imagery. After all we are talking here about a signal that is sent out across several hundred kilometers, is then reflected, and a small fraction of it travels back across several hundred kilometers again to be recorded and analyzed. Overall, extracting quantitative information from radar data is usually much more difficult than from optical imagery. But the great advantage is of course that it is not affected by clouds.

Since this might be confusing for readers also looking at images of this event elsewhere – images shown here are all in UTM projection and approximately north oriented.


July 7, 2017
by chris
0 comments

First Sentinel-2B data and Sentinel-3 L2 data

Since a few days ago, the first data from Sentinel-2B has been publicly available.

Data access is offered through a separate instance of the download infrastructure so you will have to adjust any download scripts or tools you might have. It seems Sentinel-2B is going to be operated in the same way as Sentinel-2A, meaning with priority on Europe, Africa and Greenland and less frequent coverage of the larger land masses of the rest of the world. I added the daily numbers from Sentinel-2B to my satellite image numbers page; right now image recordings are not quite at the same capacity as with Sentinel-2A and images from before the last days of June are not yet available.

When Sentinel-2B is recording the same amount of data as Sentinel-2A it is supposed to cut the typical recording interval in half – from normally 10 days in Europe and Africa to 5 days. The orbit overlap at high latitudes also means that for Greenland and the European Arctic, areas north of 75° latitude up to 82.8° will be covered daily, compared to up to 79.3° previously – since ESA, in contrast to the USGS, does not reduce the number of recordings at high latitudes due to the overlaps.

Here are two sample images from the last days:

Upsala Glacier, Patagonia by Sentinel-2B

Sevastopol, Crimea by Sentinel-2B

Apart from the new Sentinel-2 images we now get public access to a few more Sentinel-3 data products here, here and here. This is more or less as expected although these are mostly relatively specialized products and not the kind of general purpose Level 2 products you might expect when you look at the MODIS products for example. I am probably going to write about this in more detail soon.

A short note explaining the different processing levels of satellite images:

  • Level-0 is usually more or less the raw data coming from the satellite.
  • Level-1 commonly includes basic calibrations of the characteristics of sensor and optics as well as geo-referencing.
  • Level-2 is mostly about compensating for undesired influences in the image, especially those resulting from the atmosphere, the view perspective and illumination. The aim is usually to characterize the earth surface in a way that is independent of the specific recording situation of the satellite and might already be targeted at a specific thematic application.
  • Level-3 is less clearly defined and usually refers to time aggregated data, further interpretations of the data or combinations with other data sources.

Some might remember the trend i postulated some time ago regarding the timing of Sentinel program satellite launches and data releases. We can now carry this forward a bit:

  • Sentinel-1A: Launch 3 Apr 2014, public data access since 9 May 2014 (1 month)
  • Sentinel-2A: Launch 23 Jun 2015, access since end November 2015 (5 months)
  • Sentinel-1B: Launch 25 Apr 2016, access since 26 Sep 2016 (5 months)
  • Sentinel-3A: Launch 16 Feb 2016:
    • OLCI L1 access since 20 Oct 2016 (8 months)
    • SLSTR L1 access since 18 Nov 2016 (9 months)
    • partial OLCI/SLSTR L2 access since 5/6 Jul 2017 (>16 months)
    • further L2 products: 16+ months and counting…
    • Any data from before 20 Oct 2016: missing and unclear if it will ever be available.
  • Sentinel-2B: Launch 7 Mar 2017, access since 5 Jul 2017 (4 months)

As you can see the trend is broken and i am sure everyone appreciates the speedup with Sentinel-2B compared to Sentinel-2A, but the release policy on Sentinel-3 is – how should i put it: remarkable. Remember the level 2 products are all things that have been routinely produced for more than a year – they are just not being made available publicly despite the regulations requiring that. You might say that releasing level 1 data is enough and everything else is just an optional add-on service, but if you look at MODIS data use for example i am pretty sure that more than 90 percent of MODIS data users use products of level 2 or above. So for the purpose of ensuring a wide adoption of Sentinel-3 data use (and i would assume at least the EU commission has this goal) holding back level 2 data is just brainless.


July 4, 2017
by chris
0 comments

Winter impressions 2017

As usual, when we have midsummer here on the northern hemisphere there is winter in the south, so here are two winter impressions from Landsat images of the southern hemisphere from the last weeks.

The first is a view of the southern part of Tierra del Fuego and the Beagle Channel:

with Ushuaia, the southernmost city on Earth:

The second image features South Georgia in a rare nearly cloud free appearance near mid winter:

The South Georgia image is also in the catalog on services.imagico.de.