August 12, 2017
by chris

Social Engineering in OpenStreetMap

With the use, popularity and economic value of OpenStreetMap increasing significantly over the last few years, interest in and attempts at influencing the direction of the project and its participants have also increased a lot. I want to look here a bit at how this works, sometimes in unexpected ways.

I use the term social engineering here in the sense of activities that aim to make people do (or not do) certain things not by educating them and enabling them to make better decisions according to their own priorities, but by influencing their perception to serve certain goals without them being aware of it. Some might consider this defensible or even desirable if it serves an ulterior motive, but i will take a strictly humanistic viewpoint here.

Note that social engineering does not necessarily require those who actually influence others to be aware of the reasons.

OpenStreetMap is well known among digital social projects and internet communities for having relatively few firm rules and for giving its members a lot of freedom in how to work within the project. This of course also provides a lot of room for social engineering. On the other hand, the OpenStreetMap community is fairly diverse, at least in some aspects, and quite connected, so it is rather difficult to target a specific group of people without others also becoming aware of the activities. This means classical covert social engineering, where people are not aware they are being engineered, is not that dominant.

But there are a lot of activities in OpenStreetMap that can be considered more or less open social engineering attempting to influence or organize mapping activities. Humanitarian mapping is one of the most iconic examples of this, and there are also quite a number of widely used tools, like MapRoulette, that can be used to support such activities.

The number of people mapping in OpenStreetMap on a regular basis makes influencing them to focus on mapping certain things fairly attractive, even on a relatively small scale. But this is relatively harmless because

  • it is fairly direct,
  • the influence and often even the interests behind it can usually be readily seen by the mapper,
  • if it goes over the top such activity can quite easily be shut down or moderated by the community.

In other words: mappers engaging in a HOT project are not that deeply manipulated because they do not participate for reasons fundamentally different from the reasons they believe in. They might not know exactly how much the people in the area they map in profit from what they do, or what economic interests the organization planning the mapping activity has, but in broad strokes they still make an informed decision to participate.

Nonetheless such activities are not without problems in OpenStreetMap, especially since they can affect the social balance in the project. Local mappers mapping their environment, for example, can often feel bullied or patronized if organized mapping activities from abroad start in their area.

This is however not primarily what i want to discuss here. I want to focus on a more subtle form of social engineering i would call social self engineering. A good example of how this works in OpenStreetMap is what we call mapping for the renderer.

Mapping for the renderer in its simplest form occurs when people enter data into the OSM database not because it represents an observation on the ground but to achieve a certain result in a map. Examples include:

  • strings being entered into name tags of features that are not names in an attempt to place labels.
  • place nodes being moved so a label appears at a more appealing position.
  • classifications of places or roads being inflated to make them appear earlier or more prominently in maps.
  • tags being omitted from features because their appearance in the map is considered ugly.

Compared to normal social engineering, the roles are kind of reversed here. The one whose behavior is changed is the one who actually makes the decision (hence self engineering), and the influence to do so comes from someone (the designer of the map) who is often not even aware that this might happen and is usually not really happy about it being the case.

This simple form of mapping for the renderer is widespread, and those who do it – while they usually know they are doing something that is not quite right – are usually not fully aware of why they are motivated to do so and what consequences it has in terms of data quality. In most cases they simply consider it a kind of shortcut or procedural cheating. The specific problems of the whole field of interaction between map designers and mappers, by the way, are something i have discussed in more depth before.

There is another, less direct variant of mapping for the renderer (or more generally: mapping with specific consideration for a data user) that i would call preemptive mapping for the renderer. A good example of this is the popular is_in tag (and variants of it like is_in:country), which indicates the country (or other entity) a certain town or other place is located in. I am not aware of any application that depends on this tag to work properly – Taginfo lists Nominatim as the only application actually using it. The very idea that it makes sense in a spatial database to manually tag the fact that a certain geometry is located within another geometry is preposterous. Still, there are more than 2 million objects with this tag in the OSM database.

Why this happens has a lot to do with cargo cult. In fact quite a lot of the tagging ideas and proposals developed and communicated in OSM can largely be classified as cargo cult based, and this is one of the reasons why many mappers look down on tagging discussions. The very idea that any desire to document an observation on the ground in OSM needs to go through some universal classification system is inherently prone to wishful thinking. Sometimes a sophisticated structured tagging system is developed to make it attractive for developers to implement – which luckily often ensures it is used neither by mappers nor by data users. The idea of an importance tag that re-surfaces every few months somewhere falls into the same category: out of the desire to have an objective and universal measure of importance for things, people invent an importance tag and hope the mere existence of this tag will actually produce such a measure.

But not all such mapping ideas are unsuccessful. We also have quite a few tags that were invented because someone thought it would make things easier for data users, and where mappers keep investing a lot of time to actually tag them – like the mentioned is_in. Or the idea to map things as polygons that could just as accurately be mapped as linear geometries or nodes – like here.

The problem with this is not only the waste of mapping resources; it also sometimes encourages data users not to invest in interpreting more sensible mapping methods. Preemptive mapping for the renderer – even if based on considerations that make some sense – always aims for technologically conservative data interpretation. This way it hampers actual innovation and investment in intelligent and sophisticated interpretation of mapper centered tagging and mapping methods. The is_in tag for example was invented back in the early days of OpenStreetMap, when there were no boundary relations that could be used to automatically check where a place is located. So instead of inventing such a better suited solution for the problem, someone took the technologically simple route of putting the burden of this on the mapper. Luckily, in this case this did not prevent the better solution of boundary relations and algorithmic point-in-polygon tests from being developed and established.
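To illustrate why manually tagging containment is redundant: a point-in-polygon test is a few lines of standard code. Here a minimal even-odd ray casting sketch with a made-up rectangular boundary standing in for a country polygon – real OSM tools would of course use tested geometry libraries and actual boundary relation geometries rather than hand-rolled code like this:

```python
def point_in_polygon(lon, lat, polygon):
    """Even-odd ray casting test: does (lon, lat) lie inside the
    polygon given as a list of (lon, lat) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does this edge cross the horizontal ray going right from the point?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

# crude rectangle standing in for a country boundary (hypothetical data)
boundary = [(5.9, 45.8), (10.5, 45.8), (10.5, 47.8), (5.9, 47.8)]
print(point_in_polygon(8.5, 47.4, boundary))   # place inside -> True
print(point_in_polygon(2.3, 48.9, boundary))   # place outside -> False
```

With boundary relations in the database, any data user can derive what is_in records by running exactly this kind of test – which is why mapper time spent on the tag is wasted.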

And while attempts by data users to directly influence mappers to create data that is easy for them to interpret are often quite easily spotted and rebutted, the preemptive variant from the side of the mapper is often much less obvious in practice. And the motives for why a mapper uses or supports a certain problematic tagging are often complicated and unclear.

So if – as a mapper – you want to really support and encourage competent data use, you had better ignore any assumed interests of data users and map in the way that lets you, as a mapper, most efficiently represent your observations on the ground in data form.


July 28, 2017
by chris

Perceived and actual quality in image sources for OpenStreetMap

One of the pitfalls when OpenStreetMap contributors map remotely based on aerial or satellite images without recent on-the-ground knowledge is that it sometimes leads to “improving” seemingly inaccurate data based on outdated information. The most iconic examples for this are cases where buildings have been demolished but remain visible in popular image sources and mappers keep recreating these buildings because they seem to be missing in the OpenStreetMap database.

The whole problem has increased in recent years because the number of different image sources being used for mapping has increased a lot, while at the same time the average age of image sources tends to increase because images are not always updated at regular intervals. Quality of imagery is ultimately a multi-dimensional measure, but to mappers – especially if they are unfamiliar with the area they look at – image resolution is the most observable dimension of quality, and mappers are often inclined to consider the highest resolution image source the best one. The most important dimensions of image quality probably are:

  • spatial resolution
  • positional accuracy – which is not necessarily correlated with resolution – see my post on the subject.
  • age – lack of consideration for this is the cause of the problem described above.
  • seasonality – images from different seasons often differ in their suitability for mapping different things. A frequent problem of high resolution images from Bing and DigitalGlobe at higher latitudes, for example, is that images are often from early in the year (when weather is frequently better than later) and most things you’d want to map are hidden by snow.
  • cloud freeness

Human made infrastructure is not the only case where the combination of these different dimensions of quality for a certain mapping task leads to the highest resolution image source clearly not being the best one. I recently came across a great example to illustrate this.

Background story: back in 2013 i mapped the glaciers of New Guinea in OpenStreetMap based on then new Landsat imagery. Bing did not offer any useful imagery in the area back then and still does not.

With the recently released DigitalGlobe image layers there is now high resolution coverage of this, but the images are not very new, likely from around 2011-2012. More importantly though, the Northwall Firn area has a significant offset – which leads to the new mapping by palimpadum, made in good faith, actually being less accurate than before.

I have now put up two new images on the OSM images for mapping which can help with actually updating and improving the glaciers and possibly other things in the area. These images also illustrate well the other dimensions of image quality for mapping.

The newer of the two images is from 2016, by Sentinel-2. It is the most recent cloud free open data image of the area currently available. But it has a serious flaw: it features fresh snow on the highest areas of the mountains, making accurate mapping of the glaciers based on it impossible.

The other image is from 2015, by Landsat 8, and shows the glaciers free of fresh snow so you can clearly see their extent. So overall we have:

  • The DigitalGlobe image which offers the highest resolution but is the least up-to-date and has at least in small parts a fairly bad positional accuracy. Also large parts are not usable due to clouds.
  • The Sentinel-2 image which is the most up-to-date but is impaired by fresh snow.
  • The Landsat 8 image which is slightly lower resolution than that of Sentinel-2 but offers a snow free view of the glaciers.

Positional accuracy of the Sentinel-2 and Landsat images is very similar in the area by the way. And finally clouds are also the reason why we have no newer images from either Landsat or Sentinel-2 that offer a clear view of the area.

For mapping the glaciers you would of course want to use the Landsat image here. For mapping the mine, which tends to change quite rapidly, the Sentinel-2 image is probably best. Other things like lakes or landcover can be safely mapped from the DigitalGlobe data, although it is a good idea to check and possibly correct image alignment using the Landsat and Sentinel-2 data as reference.
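The trade-off described above can be expressed as a small source-per-task helper. This is purely a toy sketch: the per-dimension scores and task weights below are illustrative numbers made up to match the situation described in the text, not measurements of the actual imagery:

```python
# Toy sketch: each source gets per-dimension scores (0 = unusable, 1 = best).
# All numbers here are illustrative assumptions, not measured values.
sources = {
    "DigitalGlobe": {"resolution": 1.0, "age": 0.2, "accuracy": 0.4, "snow_free": 1.0},
    "Sentinel-2":   {"resolution": 0.4, "age": 1.0, "accuracy": 0.9, "snow_free": 0.0},
    "Landsat 8":    {"resolution": 0.3, "age": 0.8, "accuracy": 0.9, "snow_free": 1.0},
}

# which dimensions matter how much for which mapping task (weights)
tasks = {
    "glaciers": {"snow_free": 3, "accuracy": 2, "age": 1, "resolution": 1},
    "mine":     {"age": 3, "resolution": 1, "accuracy": 1, "snow_free": 0},
    "lakes":    {"resolution": 3, "accuracy": 1, "age": 1, "snow_free": 1},
}

def best_source(task):
    """Pick the source with the highest weighted score for a task."""
    w = tasks[task]
    return max(sources, key=lambda s: sum(w[d] * sources[s][d] for d in w))

for t in tasks:
    print(t, "->", best_source(t))
```

With these (made-up) numbers the helper reproduces the conclusions above: Landsat 8 for the glaciers, Sentinel-2 for the mine, DigitalGlobe for lakes and landcover.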

Even if you are not really interested in glacier mapping, i would recommend loading the three image sources in your favorite OSM editor and switching between the different images a bit to get a feeling for the differences described. Here are the URLs for the OSMIM layers:



July 23, 2017
by chris

Seeing in the dark

As many of you have probably already read elsewhere, there has recently been a large iceberg calving event in the Antarctic, at the Larsen C ice shelf on the east side of the Antarctic Peninsula. The interesting thing about this is much less the event itself (yes, it is a large iceberg, and yes, it probably is part of a general decline of ice shelves at the Antarctic Peninsula over the last century or so, but no, this is not in any way record breaking or particularly alarming if you look at things with a time horizon of more than a decade). The interesting thing is more the style of communication about it. Icebergs of this size and larger occur in the Antarctic at intervals of several years to a few decades. It is known that many of the large ice shelf plates show a cyclic growth pattern interrupted by the calving of large icebergs like this one, sometimes on a time scale of more than fifty years. This is the first such event that was closely tracked remotely. In the past people usually noticed such events a few days or weeks after they happened, while in this case there was a lot of anticipation as the cracks developed in the ice, and for months we had frequent predictions of the calving being expected any day now – or in other words: a lot of people apparently were quite wrong in their assumptions about how exactly such a calving would progress.

I am not really that familiar with the dynamics of ice myself, so i won’t analyze this in detail. The important thing to keep in mind is that ice at the scale of tens to hundreds of kilometers, and under the pressures involved here, behaves very differently from what you would expect by analogy to ice on a lake or a river you might observe up close.

The other interesting thing about this iceberg calving is that it happened in the middle of the polar night. Since the Larsen C ice shelf is located south of the Antarctic Circle, this means permanent darkness during this time. So how do you observe an event in the dark via open data satellite images?

One of the most interesting possibilities for nighttime observation is the Day/Night Band of the VIIRS instrument. This is a visual/NIR range sensor capable of recording very low light levels. It is best known from the widely circulated nighttime city lights visualizations – which feature artificial colors.

This sensor is capable of recording images illuminated by moonlight or other light sources like the aurora, so we got some of the earliest images of the free floating iceberg from VIIRS.

Antarctic in Moonlight – VIIRS Day/Night Band

This image uses a logarithmic scale; actual radiance varies across the shown area by about three orders of magnitude.
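Such a display scaling can be reproduced in a few lines. Here a generic sketch of log-scaling radiance values spanning three orders of magnitude into an 8-bit grayscale range – the clip limits and units are placeholders, and the actual processing behind the image above will differ:

```python
import numpy as np

def log_scale(radiance, lo, hi):
    """Map radiance values in [lo, hi] logarithmically to gray levels 0..255."""
    r = np.clip(radiance, lo, hi)
    scaled = (np.log10(r) - np.log10(lo)) / (np.log10(hi) - np.log10(lo))
    return np.round(scaled * 255).astype(np.uint8)

# values spanning three orders of magnitude, as in the VIIRS image
sample = np.array([1e-4, 1e-3, 1e-2, 1e-1])
print(log_scale(sample, 1e-4, 1e-1))  # evenly spaced gray levels: [0 85 170 255]
```

A linear scaling of the same data would leave everything except the brightest areas nearly black – which is exactly why a logarithmic scale is used for such low light imagery.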

Another, much older and better established way of nighttime observation is the thermal infrared. In the long wavelength infrared the earth surface emits light based on its temperature, and this emission is nearly the same during day and night. Clouds emit in the long wavelength infrared as well, which makes thermal infrared imaging an attractive and widely used option for 24h weather observation by weather satellites.

Thermal infrared data is available from the previously mentioned VIIRS instrument but also from MODIS and Sentinel-3 SLSTR. Here is an example of the area from SLSTR.

Glowing in the dark – thermal infrared emissions recorded by Sentinel-3 SLSTR

During the polar night the highest temperatures occur at open water surfaces, like on the west side of the Antarctic Peninsula on the upper left, and the lowest temperatures in this area occur on the ice shelf surfaces and the low elevation parts of the glaciers on the east side of the peninsula. The outline of the new iceberg is well visible because of the relatively warm open water and thin ice in the gap between the iceberg and the ice shelf.

Thermal infrared images are also available in significantly higher resolution from Landsat and ASTER. Landsat does not regularly record nighttime images in the Antarctic, but images have recently been recorded in the area because of the special interest in the event. Here is an assembly of images from July 12 and July 14.

High resolution thermal infrared by Landsat 8

You can see there is a significant amount of clouds, especially in the scenes on the right side, obscuring the ice features under them. Here is a magnified crop showing the level of detail available.

ASTER so far does not offer any recent images of the area. In principle ASTER currently provides the highest resolution thermal imaging capability – both in the open data domain and among publicly available systems in general – though there are probably classified higher resolution thermal infrared sensors in operation.

So far everything shown has been from passive sensors recording natural reflections and emissions. The other possibility for nighttime observations is through active sensors, in particular radar. This has the added advantage that it is independent of clouds (which are essentially transparent at the wavelengths used).

Radar however is not inherently an image recording system. Take the classical scope of a navigational radar system, for example – the two-dimensional image is built by plotting the runtime of the reflected signal received from the different directions. Essentially the same applies to radar data from satellites.

Satellite based radar systems that produce open data products currently exist on the Sentinel-3 and Sentinel-1 satellites. Sentinel-3 features a radar altimeter capable of measuring the surface elevation in a small strip directly below the satellite. This is not really of interest for the purpose of observing an iceberg calving.

Sentinel-1 on the other hand features a classical imaging radar system. The most widely used data product from Sentinel-1 is the Ground Range Detected data which is commonly visualized either in form of a grayscale image or a false color image in case several polarizations are combined. Here is an example of the area under discussion.

Looking through clouds – imaging radar from Sentinel-1

Note that while this looks like an image viewed from above, it is not. The two-dimensional image is created by interpreting runtime as distance (which is a quite accurate assumption) and then mapping that distance to a point on an ellipsoid model of the earth (which is an extreme simplification). In other words: the image shows the radar reflection strength at the position it would have come from if the earth surface were a perfect ellipsoid. For the ice shelf this is not such a big deal, since it rises no more than maybe a hundred meters above the water level, which is very close to the ellipsoid, but you should never try to derive any positional information directly from such an image on land.
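The runtime-to-position mapping described above can be sketched under the simplest possible geometry – a flat reference surface and a known satellite altitude. This is a toy illustration of the principle only, not the actual ellipsoid-based Sentinel-1 processing, and the altitude and echo delay are round illustrative numbers:

```python
import math

C = 2.99792458e8  # speed of light, m/s

def slant_range(echo_delay_s):
    """Two-way signal runtime -> one-way distance antenna-to-reflector."""
    return C * echo_delay_s / 2.0

def ground_range(slant_m, altitude_m):
    """Project the slant distance onto a flat reference surface.
    A reflector elevated above that surface gets misplaced toward the
    sensor - exactly the distortion described in the text."""
    return math.sqrt(slant_m**2 - altitude_m**2)

alt = 700e3     # roughly the altitude of a Sentinel-1 type orbit
delay = 5.5e-3  # an echo delay of ~5.5 ms (illustrative)
r = slant_range(delay)
print(f"slant range {r/1000:.0f} km -> ground range {ground_range(r, alt)/1000:.0f} km")
```

Terrain that rises above the reference surface shortens the echo delay and is therefore mapped too close to the satellite track – on land, with kilometers of relief, this displacement becomes substantial, hence the warning above.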

You can also see that noise levels in the radar data tend to be much higher than in optical imagery. After all, we are talking about a signal that is sent out across several hundred kilometers, is then reflected, and a small fraction of it travels back across several hundred kilometers again to be recorded and analyzed. Overall, extracting quantitative information from radar data is usually much more difficult than from optical imagery. But the great advantage is of course that it is not affected by clouds.

Since this might be confusing for readers also looking at images of this event elsewhere: the images shown here are all in UTM projection and approximately north oriented.


July 7, 2017
by chris

First Sentinel-2B data and Sentinel-3 L2 data

Since a few days ago the first data from Sentinel-2B has been publicly available.

Data access is offered through a separate instance of the download infrastructure, so you will have to adjust any download scripts or tools you might have. It seems Sentinel-2B is going to be operated in the same way as Sentinel-2A, meaning with priority on Europe, Africa and Greenland and less frequent coverage of the larger land masses of the rest of the world. I added the daily numbers from Sentinel-2B to my satellite image numbers page; right now image recordings are not quite at the same capacity as with Sentinel-2A, and images from before the last days of June are not yet available.

Once Sentinel-2B is recording the same amount of data as Sentinel-2A, it is supposed to cut the typical recording interval in half – from normally 10 days in Europe and Africa to 5 days. The orbit overlap at high latitudes also means that for Greenland and the European Arctic, areas north of 75° latitude up to 82.8° will be covered daily, compared to 79.3° previously – since ESA, in contrast to the USGS, does not reduce the number of recordings at high latitudes despite the overlaps.

Here are two sample images from the last days:

Upsala Glacier, Patagonia by Sentinel-2B

Sevastopol, Crimea by Sentinel-2B

Apart from the new Sentinel-2 images, we now get public access to a few more Sentinel-3 data products here, here and here. This is more or less as expected, although these are mostly relatively specialized products and not the kind of general purpose Level 2 products you might expect when you look at the MODIS products for example. I will probably write about this in more detail soon.

A short note explaining the different processing levels of satellite images:

  • Level-0 is usually more or less the raw data coming from the satellite.
  • Level-1 commonly includes basic calibrations of the characteristics of sensor and optics as well as geo-referencing.
  • Level-2 is mostly about compensating for undesired influences in the image, especially those resulting from the atmosphere, the view perspective and illumination. The aim is usually to characterize the earth surface in a way that is independent of the specific recording situation of the satellite and might already be targeted at a specific thematic application.
  • Level-3 is less clearly defined and usually refers to time aggregated data, further interpretations of the data or combinations with other data sources.

Some might remember the trend i postulated some time ago regarding the timing of Sentinel program satellite launches and data releases. We can now carry this forward a bit:

  • Sentinel-1A: Launch 3 Apr 2014, public data access since 9 May 2014 (1 month)
  • Sentinel-2A: Launch 23 Jun 2015, access since end November 2015 (5 months)
  • Sentinel-1B: Launch 25 Apr 2016, access since 26 Sep 2016 (5 months)
  • Sentinel-3A: Launch 16 Feb 2016:
    • OLCI L1 access since 20 Oct 2016 (8 months)
    • SLSTR L1 access since 18 Nov 2016 (9 months)
    • partial OLCI/SLSTR L2 access since 5/6 Jul 2017 (>16 months)
    • further L2 products: 16+ months and counting…
    • Any data from before 20 Oct 2016: missing and unclear if it will ever be available.
  • Sentinel-2B: Launch 7 Mar 2017, access since 5 Jul 2017 (4 months)

As you can see the trend is broken, and i am sure everyone appreciates the speedup with Sentinel-2B compared to Sentinel-2A, but the release policy on Sentinel-3 is – how should i put it – remarkable. Remember, the Level 2 products are all things that have been routinely produced for more than a year – just not made available publicly, despite the regulations requiring that. You might say that releasing Level 1 data is enough and everything else is just an optional add-on service, but if you look at MODIS data use for example, i am pretty sure that more than 90 percent of MODIS data users use products of Level 2 or above. So for the purpose of ensuring wide adoption of Sentinel-3 data (and i would assume at least the EU commission has this goal), holding back Level 2 data is just brainless.

Tierra del Fuego in Winter 2017

July 4, 2017
by chris

Winter impressions 2017

As usual, when we have midsummer here in the northern hemisphere there is winter in the south, so here are two winter impressions from Landsat images of the southern hemisphere from the last weeks.

The first is a view of the southern part of Tierra del Fuego and the Beagle Channel:

with Ushuaia, the southernmost city on Earth:

The second image features South Georgia in a rare nearly cloud free appearance near mid winter:

The South Georgia image is also in the catalog on

June 30, 2017
by chris

Another survey

There is another survey for open data satellite image users, this time from the Copernicus program:

In contrast to the Landsat survey i featured recently – which, by the way, you can still participate in – this one is the usual multiple choice survey. It is not anonymous though; they ask you to provide your name and other details about yourself.

The survey questions have quite clearly been put together with the aim of covering a lot of subjects while formally limiting the survey to no more than 20 questions. The result is many compound questions which try to ask several things at once, which will inevitably lead to answers that are not very useful. For example, there are the following two questions:

11. Data products - Please indicate your level of satisfaction
* Sentinel-1 data - Poor/Average/Good/Excellent or N/A
* Sentinel-2 data - Poor/Average/Good/Excellent or N/A
* Sentinel-3 data - Poor/Average/Good/Excellent or N/A

12. Processing levels and data formats -
Please indicate your level of satisfaction
* Processing levels (L0, L1, L2, ...) - Poor/Average/Good/Excellent or N/A
* Data formats - Poor/Average/Good/Excellent or N/A

The first is a question about general satisfaction with the data, separately for the different platforms – which makes sense. But then there is the more specific question about satisfaction with specifics of the data format and processing – which only makes sense to answer for each satellite platform individually but which can only be answered in total.

The sad thing about this is that the aim of keeping the number of questions low is to allow people to participate in the survey with relatively little time, but this kind of compound or aggregated question actually makes answering much slower, because you need to weigh your observations to be able to give an answer.

And in the results of the survey you might then see that satisfaction with the data format is overwhelmingly in the Average-Good range, while in fact users are often extremely happy with the Sentinel-1 format but extremely dissatisfied with that of Sentinel-3. In other words: if they were really interested in differentiated information on user satisfaction, they should have asked the questions differently.

I would still encourage anyone who has ever used Copernicus Sentinel data to participate in the survey. Even though things like those asked about in the questions above are highly unlikely to change substantially, and many important questions are not asked, it is important to show that users have a differentiated opinion on these matters and are not indifferent to the quality of the data and the data access services.


June 26, 2017
by chris

Is smaller better? – Where the rubber meets the road with earth observation microsatellites

First, an introductory note on my policy for reviewing geodata products – since i occasionally receive questions along the lines of why don’t you review product X by company Y. I review things i find interesting and useful or that i consider a significant innovation. In the case of satellite image mosaic products, i looked at the work of Mapbox and Google in depth because when these were introduced they were something new and innovative no one had done before. I did however not discuss any of the various me-too products introduced since then based on Landsat or Sentinel-2 data, because none of them so far shows either a significant step up in quality of the results or any notable technical innovation.

I also of course focus on products that are open data or are based on open data. This is both because this is the area i am most knowledgeable in and because i think these are of most interest to my readers.

With that in mind, the products of Planet Labs would not normally be the subject of a review by me. While they use open data satellite imagery and offer products based on it to their customers, Planet Labs apparently does not currently offer any open data products. They have a product called Open California which is supposed to be under CC-BY-SA, but it is not actually publicly available (which makes it look quite strongly like openwashing).

Planet Labs is the most prominent company to have grown out of the microsatellite hype of the past years and undoubtedly the most serious player in that field for earth observation purposes. They have developed and launched a significant number of very small satellites of just a few kilograms in weight over the last years, but so far the only public service available from them is a program called Planet Explorer, which is a short-interval (one or three month) near global satellite image mosaic based on data from these and their other, larger satellites (coming from a purchase back in 2015). I am reviewing this here not because of the practical usefulness of the product itself (which seems rather limited), nor because it is technically innovative (which it might be on a basic data processing level but which it certainly is not in terms of image processing). I review it here as a contribution to a fact oriented public discourse on currently produced and available satellite imagery, which obviously has to include Planet Labs images.

I want to clarify that this is not a review of the Planet Labs imagery itself. Planet Explorer does not even offer a full resolution view to the unregistered user, and there are currently apparently no raw sample images available for Rapideye and Planetscope imagery, so a real review is not possible, at least not without signing an NDA. This only discusses the corpus of imagery as shown in Planet Explorer, which i assume to be the bulk of useful imagery currently produced by the company in this resolution range (Planet Labs recently also purchased higher resolution satellites, data from which is not included there).

Not off to a good start

To get this out of the way first – Planet Explorer uses OpenStreetMap data in their map for labels, boundaries and other things in clear violation of the ODbL. They mention OpenStreetMap hidden under Terms, which is kind of the internet equivalent of having it placed “in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying Beware of the Leopard”. The user does not commonly get to see this – yet attribution visible to the user is exactly what the ODbL would require.

You could also call it shooting yourself in the foot with regards to public relations with the open data community. Most people can and do respect it if a company’s business model is based on licensing data and they therefore offer none of their data as open data – but tolerance ends when the very same company is too cheap to even properly acknowledge the use of data that has been generously made available by others as open data.

I will ignore that for the purpose of this review but everyone should keep this in mind when making use of Planet Labs services of course.

What does it tell us about the data?

The map shows us aggregates of the Planet Labs imagery in either monthly or three month intervals. At the moment this apparently comprises three different types of images:

  • Images from the Rapideye constellation which can be identified in the mosaic based on their relatively wide recording swath (77 km).
  • Images from the very small Planetscope satellites in sun-synchronous orbit with 24 km recording width, for which Planet Labs is mostly known.
  • Images from the Planetscope satellites in ISS orbit, which can be distinguished from the former by the slightly narrower swath (20 km) and the lower orbit inclination.

I won’t comment much on the assembly strategy they use – it is fairly unsophisticated. Cloud detection and masking seems to be applied to the Rapideye imagery but not to the Planetscope data.

The more interesting part is coverage. Planet Labs has for a long time advertised that their goal is daily coverage of the whole planet – which is of course meant for the land masses only. Their claims about the extent to which they actually achieve this have always been a bit fuzzy though – the numbers usually appear to be theoretical goals, and they speak of the ability to record a full daily coverage but do not say this is actually being recorded.


Coverage seems to be constrained between 57°S and 76°N; the northern limit however is quite clearly not a recording limit but a processing limit. These limits by the way happen to be approximately the extent of the known world until the 17th to 18th century. In the low latitude areas where they do record they achieve approximately 90-95 percent coverage per monthly interval at the moment (with May 2017 being the last month covered so far). It is possible that part of this is because completely cloud covered images are excluded from processing and their actual coverage is slightly better. This is still pretty far away from a daily global coverage but it does not preclude the possibility that their daily coverage in terms of recorded area is close to or above the total land area on earth. For the latter you just need sufficient recording capacity, in other words: enough satellites. For actual full coverage you would also need these recordings to be distributed uniformly across the earth's land surfaces, which is a whole other problem.

Their recording strategy at the moment looks rather odd, with seemingly random gaps in the recordings. I of course don’t know what technical constraints exist with these very small satellites, how specifically they can task the recordings and how much they depend on having a receiving station close by. And keep in mind that they can’t really maneuver these satellites, so they have very limited control over where a satellite looks at a certain time. Imagine playing darts with a very low skill level and having the task to hit all regions of the dart board at least once. You need many more darts than there are regions on the board because you end up hitting many parts several times before achieving full coverage.
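The darts analogy can be made quantitative: hitting all of n equally likely regions with purely random throws is the classical coupon collector problem, which on average needs about n·ln(n) throws – several times more than n. Here a minimal simulation (the region count is arbitrary, purely for illustration):

```python
import random

def throws_until_full_coverage(n_regions, rng):
    """Count random 'darts' needed until every region is hit at least once."""
    hit = set()
    throws = 0
    while len(hit) < n_regions:
        hit.add(rng.randrange(n_regions))
        throws += 1
    return throws

def expected_throws(n_regions):
    """Coupon collector expectation: n * H(n), with H the harmonic number."""
    return n_regions * sum(1.0 / k for k in range(1, n_regions + 1))

rng = random.Random(0)
n = 100
mean = sum(throws_until_full_coverage(n, rng) for _ in range(200)) / 200
print(round(expected_throws(n)))  # about 519 for n = 100
print(mean)                       # simulation average, close to the expectation
```

So with 100 regions you need on average more than five throws per region before every region has been hit once – which is essentially the penalty an imaging constellation pays for not being able to target its recordings.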

As an example here a view of their May 2017 image of southern Germany.

This has 100 percent coverage but as you can clearly see there is a strip of cloud affected images in the center part of the area, indicating they don’t have any cloud free May 2017 recordings of this area (or a really bad image quality assessment as the basis of mosaic assembly – which however seems unlikely). If i look at the weather in May (based on MODIS and VIIRS imagery for example) there are at least four days with good weather in the morning to noon time frame that would have allowed for better images in the area (May 10, 17, 26 and 27). Here a quick assembly of the same area based on Sentinel-2 data from May 10 and May 27.

That i can produce this from Sentinel-2 data with its ten day recording interval is pure luck. But it shows that while the number of images Planet Labs records in the area might, based on sheer numbers, be enough to cover it completely every day, the images actually recorded clearly fall short by a fairly big margin. And this is at a latitude where, due to the orbit geometry, you already have on average a much higher potential recording density than at the equator.
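The kind of per-pixel compositing behind such a quick assembly can be sketched in a few lines. This is a deliberately crude version using overall brightness as a cloud proxy (clouds are bright) and synthetic toy arrays in place of real Sentinel-2 data – actual mosaic production uses far more elaborate quality measures:

```python
import numpy as np

def least_cloudy_composite(images):
    """Per-pixel composite: for each pixel pick the acquisition with the
    lowest overall brightness (a crude cloud proxy, since clouds are bright).
    `images` is a list of (height, width, bands) float arrays, one per date."""
    stack = np.stack(images)          # (dates, h, w, bands)
    brightness = stack.mean(axis=-1)  # (dates, h, w)
    best = brightness.argmin(axis=0)  # winning date index per pixel
    h, w = best.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return stack[best, yy, xx]        # (h, w, bands)

# toy example: first date mostly cloudy, second mostly clear
a = np.full((2, 2, 3), 0.8)  # bright (cloudy) everywhere ...
a[0, 0] = 0.1                # ... except one clear pixel
b = np.full((2, 2, 3), 0.2)  # dark (clear) everywhere ...
b[1, 1] = 0.9                # ... except one cloudy pixel
out = least_cloudy_composite([a, b])
print(out[..., 0])  # [[0.1 0.2] [0.2 0.8]]
```

Each output pixel comes from whichever date was darkest there, so the single clear pixel of the first date and the clear bulk of the second date both survive into the composite.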

Is smaller better?

From this specific analysis of the current Planet Labs offerings and capabilities i now get to the ultimate, more general question – is a large number of small satellites recording a relatively narrow field of view better or worse than a small number of larger satellites with a wider field of view?

Note that although i phrased this question independently of the recording resolution, in reality the two are of course connected – higher resolution satellites tend to have a more narrow field of view. Here a few examples:

Satellite      mass      recording width   resolution   width in pixel (approximate)
Landsat        1500 kg   190 km            15 m         13000
Sentinel-2     1100 kg   290 km            10 m         30000
Rapideye        156 kg    77 km            6.5 m        12000
Planetscope       6 kg    24.6 km          3.7 m         6600
Skysat           83 kg     8 km            0.9 m         8000
Pleiades        970 kg    20 km            0.7 m        30000
WorldView-4    2500 kg    13.1 km          0.31 m       42000

For comparing resolutions note that the Planetscope satellites are the only ones with a Bayer pattern sensor (so the specified resolution is only achieved for all spectral bands in combination).
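The last column of the table follows directly from the other two – width in pixels is simply swath divided by ground resolution (the published numbers are rounded). A quick check:

```python
# swath width (km) and ground resolution (m) as given in the table above
satellites = {
    "Landsat":     (190.0, 15.0),
    "Sentinel-2":  (290.0, 10.0),
    "Rapideye":    (77.0, 6.5),
    "Planetscope": (24.6, 3.7),
    "Skysat":      (8.0, 0.9),
    "Pleiades":    (20.0, 0.7),
    "WorldView-4": (13.1, 0.31),
}

for name, (swath_km, res_m) in satellites.items():
    pixels = swath_km * 1000.0 / res_m  # swath / resolution = pixels across
    print(f"{name:12s} ~{pixels:6.0f} px across the swath")
```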

A very wide recording width like that of Sentinel-2 of course causes additional problems with positional accuracy and varying illumination and viewing conditions across the field of view. This is not what i want to talk about here though. Leaving these aside, there are mainly the following factors:

  • Satellites recording a smaller view (in terms of viewing angle as well as width in pixels) can be built cheaper and more lightweight. This is the main reason for the small field of view of Planetscope.
  • Smaller individual images allow more fine grained targeting of recordings – either on specific places or on good weather windows. In other words: if your recording planning is good smaller images will have a smaller average cloud coverage.
  • Smaller images mean more edges between images and more problems with discontinuities in the data and assembly.
  • Developing and building a larger number of smaller satellites can be significantly less expensive than building a single large satellite. Management of risks of failures during launch and operations is also easier.
  • Recording in high resolution requires a certain minimum size of the optics which puts a hard constraint on the size of the satellite.
  • Recording in the longer wavelength infrared (SWIR/TIR) requires cooling equipment that cannot be easily miniaturized.

As you can see there are pros and cons on both sides. Additional factors come into play if you want to specifically target certain areas – which is what all current very high resolution systems do. I only look at things here for the purpose of routine large area coverage.

If you had 16 Landsat satellites and properly lined up their orbits for this purpose you could record a solid daily coverage (yes, you would of course also need to significantly extend the ground based infrastructure for this). Based on just the field of view (a strongly simplified way to look at it) you could do the same with just about 16 × 190/24.6 ≈ 124 Planetscope satellites if (a) you can operate them on the same duty cycle (the same recording duration per orbit) – which could be realistic although current operations do not demonstrate this – and (b) you could perfectly and permanently align their orbits relative to each other – which you can’t, because they lack propulsion, and the options they have for controlling atmospheric drag probably do not give sufficient control for that. Hence they would need a significantly larger number of satellites, probably several times this number, for a true daily global coverage.
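The back-of-envelope scaling above in code form:

```python
# Back-of-envelope: how many Planetscope satellites would match the daily
# swath coverage of a hypothetical 16-satellite Landsat constellation,
# assuming (unrealistically) identical duty cycles and perfectly aligned
# orbits -- a lower bound, not a realistic constellation size.
landsat_swath_km = 190.0
planetscope_swath_km = 24.6
landsat_count = 16

equivalent = landsat_count * landsat_swath_km / planetscope_swath_km
print(round(equivalent))  # about 124
```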

My prediction is that if Planet Labs stays in business for a longer time with their current business model and the aim to provide continuous coverage of larger areas world wide on a daily basis they will probably add some form of propulsion to their satellites sooner or later.


June 16, 2017
by chris

Mapping coasts and the tidal zone

With the recent introduction by DigitalGlobe of additional imagery layers for the purpose of mapping in OpenStreetMap, significantly more source material is now available for remote mapping in OSM. However in many, especially remote, areas my OSM images for mapping still provide the most recent image source readily available to mappers, and in quite a few areas also the best overall. And even in areas where recency of images is not that important and where Bing and DigitalGlobe offer good quality images, an additional independent image source can be very useful for interpretation.

I added a few additional images now with a focus on coastal areas and tidal flats. Areas with changing water levels are something where open data imagery is of particular use even if higher resolution images are available from other sources. With open data imagery you can specifically select high and low water levels and are thereby able to accurately map the coastal features, while in higher resolution image sources you tend to get more or less random water levels and essentially need to map based on guesswork if you do not have additional sources of information.
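One common way to objectively delineate the water extent in such low and high water images is a normalized difference water index computed from the green and NIR bands – a standard technique (McFeeters' NDWI), not necessarily what i use for these images, sketched here with made-up reflectance values. The zero threshold is only a frequent starting point:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Water mask from McFeeters' NDWI = (green - NIR) / (green + NIR).
    Water is usually positive because it reflects very little near infrared."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # avoid divide by zero
    return ndwi > threshold

# made-up reflectances: open water, wet tidal flat, dry land
green = np.array([0.06, 0.08, 0.10])
nir = np.array([0.02, 0.09, 0.25])
print(ndwi_water_mask(green, nir))  # [ True False False]
```

Running this on a low tide and a high tide image and differencing the two masks then gives a first approximation of the tidal zone.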

Mapping coasts and the tidal zone in OpenStreetMap is not that difficult, here the basics:

  • the coastline (natural=coastline) is placed at the mean high water (springs) line.
  • the area between the high and low water lines is mapped as natural=wetland with wetland=tidal_flat or another fitting wetland type.
  • water areas and other features influenced by the tides can be marked with tidal=yes.

Much of this is of course rather difficult to assess without local knowledge, so be careful when mapping just from a distance and familiarize yourself with the area in question before you do so. In many of the areas i show in the following, at least the basics – delineating the coastline and mapping the tidal flats – are not that difficult to do though.

You can find some more details on coastal mapping in another blog post about beaches and reefs.

Bahía Blanca

Bahía Blanca is the name of a city as well as a bay in Argentina and features one of the largest tidal wetlands in South America, currently quite poorly mapped in OpenStreetMap. I added images featuring a low and a high water level.

Bahía Blanca low tide

Bahía Blanca high tide

Note these are from different times of the year, so differences in color are not exclusively due to the tidal cycle. The whole area is also covered by high resolution image sources, but with randomly varying water levels, so accurate mapping based on these alone is quite difficult.

Cook Inlet

Cook Inlet is a large bay in southern Alaska which features quite large tidal flats at its northern end near Anchorage.

This late summer image also allows mapping in the mountains around the bay. The area is partly covered by high resolution image sources but largely from less than optimal recording dates.

Bogoslof Island

Also located in Alaska is Bogoslof Island where a volcanic eruption recently changed the shape of the island quite significantly. See also here.

Northern Dvina delta

The situation for the Northern Dvina delta near Arkhangelsk is similar to that of the Cook Inlet although existing mapping on land is already much better here. I also provide a low tide image for this area that should allow adding details of the tidal zone.

Aral Sea

Finally i also have two images showing low and high water levels of the Aral Sea – which is of course not a sea but a lake. Exact water levels vary significantly from year to year but these images will at least roughly indicate which are permanent and which are intermittent water areas at the moment.

Aral Sea low water

Aral Sea high water

There is some residual ice on the water in the northern part of the high water level image that should not be mistaken for something else. Note the correct tagging for seasonally water covered areas is natural=water + intermittent=yes or seasonal=yes, not natural=wetland – even if intermittency of water areas is not currently shown in the standard map style.

June 16, 2017
by chris

Public request for input on future Landsat requirements

NASA and the USGS are now seeking input on requirements for future Landsat missions from all data users. Quoting from the RFI:

The U.S. Geological Survey (USGS) Land Remote Sensing Program has collected a diverse set of U.S. Federal civil user measurement needs for moderate-resolution land imaging to help formulate future Landsat missions. The primary objective of this RFI is to determine if these needs are representative of the broader Landsat user community, including, but not limited to, private sector, government agencies, non-governmental organizations and academia, both domestic and foreign. Responses to this RFI will be considered along with other inputs in future system formulation.

This is quite remarkable. Usually the parameters of such projects are decided almost exclusively between public institutions. If input is sought from the general public it tends to be in the form of multiple choice questionnaires, which are often set up to lead to a specific result and are then interpreted towards that goal as well. This however looks a bit different: they ask for free form answers to a number of specific but open questions and explicitly want to know not only what you would like to have but also your reasons why you think it would be good to have.

There is no guarantee of course that any of this will actually have an effect on future Landsat plans, but i would still urge anyone routinely using Landsat data who understands the questions and feels qualified to answer them to send in their thoughts. So far the interests that went into future Landsat planning were probably almost exclusively those of government and scientific institutions as well as likely a few larger corporations. And their interests are not necessarily the same as those of the broad range of smaller private sector users, independent scientists, community projects and the like. If you belong to any of these underrepresented groups, are a Landsat data user and read my posts here on satellite image related topics with interest rather than just skimming them for interesting images, there is a good chance you could provide useful input here.

Answers should be sent before July 14.


June 4, 2017
by chris

Open data satellite image news

Here a few more news from the field of open data satellite images:

  • in reference to my recent report on the missing recordings of Sentinel-2 imagery – ESA seems to have “found” some images, and the formulation they use in the status reports is something i am going to save for future use:

    The ground segment has suffered a sporadic anomaly between March and May, leading to an incomplete dissemination of the production with about 11% products missing throughout the period.

    I mean, like: what do you mean by tax evasion, my bookkeeping suffered a sporadic anomaly last year… I updated the coverage illustrations a few days ago, including what is newly available now.

  • the USGS has updated their EarthNow! live Landsat viewer – not to be confused with Mapbox Landsat Live (which is not really live). The new version finally shows true color renderings. While this rarely shows a true live feed – you mostly get recordings from a few hours back – it is a nice illustration of how satellites actually record imagery and the only place AFAIK where you can actually see current Landsat Level 0 data.
  • the USGS is now distributing some images from the Indian IRS-P6/Resourcesat 1 satellite and the follow-up mission Resourcesat 2. Images are mostly for the US only and from two instruments: LISS-3 and AWiFS. The quality of this data is pretty good but it covers a relatively limited spectral range, with only red, green, NIR and a single SWIR band. AWiFS is quite interesting as an intermediate between the higher resolution, low revisit frequency systems like Landsat and Sentinel-2 and the low resolution, high revisit systems like MODIS, VIIRS and Sentinel-3.

    Here examples from the western United States in approximated true color with estimated blue (as i have shown previously for ASTER).

ISRO Resourcesat 2 AWiFS example

ISRO Resourcesat 2 LISS-3 example

LISS-3 full resolution crop
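To give an idea how an "estimated blue" as used in the images above can work in principle: in the simplest case a synthetic blue band is a linear extrapolation from the green and red bands. The coefficients below are illustrative placeholders only, not the calibrated values actually used for these renderings:

```python
import numpy as np

def estimate_blue(green, red, a=1.25, b=0.25):
    """Synthetic blue band as a simple linear extrapolation from green and red.
    The coefficients a and b here are illustrative placeholders, not
    calibrated values -- real estimates are tuned per sensor and land cover."""
    blue = a * np.asarray(green, dtype=float) - b * np.asarray(red, dtype=float)
    return np.clip(blue, 0.0, 1.0)  # keep reflectances in a valid range

# made-up reflectance samples: vegetation-like and bare-soil-like pixels
green = np.array([0.10, 0.30])
red = np.array([0.08, 0.35])
print(estimate_blue(green, red))
```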


June 3, 2017
by chris

Arctic places in spring

A few more satellite images from spring in the Arctic – this time featuring the northernmost settlements on Earth.

The northernmost permanent settlement on the planet is, and has been for a long time, the weather station and military base at Alert in northern Canada on the northeastern end of Ellesmere Island at 82.5 degrees north.

Alert, Ellesmere Island

Next, about one degree further south, is Station Nord in northeastern Greenland, a Danish military post. In contrast to all the other places shown, which are near the coast and accessible by ship in summer, this one is inaccessible by sea all year round in most years due to sea ice, and all supplies need to be brought in by air.

Station Nord, Greenland

Again nearly a degree further south, at 80.8 degrees north, is Nagurskoye on Alexandra Land, Franz Josef Land. This military outpost has been significantly extended in recent years – i showed an image of supply operations two years ago.

Nagurskoye, Franz Josef Land

All three of these northernmost settlements were established in the 1950s during the cold war. They are all military stations with restricted access. The northernmost settlement open for everyone to visit is Ny-Ålesund on Svalbard, which is mostly used for scientific research.

Ny-Ålesund, Svalbard

Also on Svalbard, slightly further south, is Longyearbyen, which is the northernmost larger settlement on Earth with a population of more than 2000. On the plateau south of the airport, on the left side of the image, you can see the antennas of the Svalbard Satellite Station, which receives a significant fraction of the satellite images i show here.

Longyearbyen, Svalbard

Both these Svalbard settlements are more than 100 years old, much older than the ones further north established during the cold war. And in contrast to the other places, where all resources need to be brought in from abroad, the Svalbard settlements are powered by locally mined coal.

Last – no longer competing for a record latitude in any way, but kind of significant for balance around the pole – here another image of the Russian military base on Kotelny Island which, like Nagurskoye, has been extended significantly in recent years.

Темп, Kotelny Island

Locations of all these places can also be found on the following map.

All images based on Copernicus Sentinel-2 data from April and May 2017.


May 26, 2017
by chris

Grounded sea ice in the Arctic

I mentioned the phenomenon of grounded sea ice in the Kara Sea a few years back. Here a recent image of the same area in a wider view of the whole northeastern Kara Sea showing this still happens in 2017 at the same places.

But this is not the only area in the Arctic Ocean where sea ice gets in contact with the ocean floor at places away from the coast and is thereby fixed in place, no longer moving with the general ice drift in the area. The most famous area of this kind is the Norske Øer Ice Barrier off the East Greenland coast. The special thing about it is that parts of the ice are semi-permanent here; it only breaks up completely in some years.

At this time the ice barrier forms a continuous solid area of ice together with the floating land fast sea ice closer to the coast. How this typically looks in summer can be seen in my Greenland mosaic. Another place where grounded sea ice is well visible at the moment is the East Siberian Sea. Here an image from March of the area north of the Medvezhyi Islands.

And here the same area a few days ago.

All images based on Copernicus Sentinel-3 OLCI data.