Archangelsk in Winter

February 28, 2018
by chris

Northern European Winter

Here are two winter impressions from Northern Europe – the first is from northern Russia showing the city of Archangelsk:

You can see the frozen Northern Dvina River and the likewise mostly frozen White Sea. Also well visible in the low sun is the smoke from the power plants in the area.

Here a magnified crop:

The second image is from northwestern Scotland:

This image not only features snow cover in the mountains but also shows a remarkable color combination, with the dark green of forested areas contrasting with the brown of the non-wooded parts of the hills and mountains.

Both images are based on Sentinel-2 data and can be found in the image catalog.

Saunders Island, South Sandwich Islands by Sentinel-2A

February 23, 2018
by chris

Satellite image news

Some news on open data satellite images:

I updated the satellite image coverage visualizations. Here is the matching coverage volume plot over time:

Open data satellite image coverage development

There are several important things you can see in it:

  • With Landsat 8 the USGS has adopted, for the second southern hemisphere summer season in a row, a changed acquisition pattern (indicated by a drop in image volume around December/January) in which Antarctic coverage is significantly reduced compared to previous years (see my yearly report for more details).
  • There have been significant fluctuations in the acquisition numbers of the Sentinel-2 satellites. Much of this is related to an inconsistent approach to the Antarctic here as well – with ESA sometimes acquiring Antarctic images with one of the satellites for a few weeks and then dropping it again. A consistent long term plan is not recognizable here.
  • In recent weeks Sentinel-2B acquisitions have been ramped up to full coverage at the nominal 10 day revisit interval (compared to the previous fairly arbitrary pattern of 10 days for Europe, Africa and Greenland and 20 days for the rest). See the sample of a 10 day coverage below. This is good news.
  • The problems with missing acquisitions and individual tiles remain the same as before, as indicated by the orange areas in the visualizations.

Full 10 day coverage by Sentinel-2B in early 2018

Another thing that changed is that ESA seems to have made a small modification to the Sentinel-2A acquisition pattern, which now includes the South Sandwich Islands. Here an example of a rare nearly cloud-free view of Saunders Island:

Saunders Island, South Sandwich Islands by Sentinel-2A

Interestingly this is limited to Sentinel-2A – Sentinel-2B so far has not acquired any South Sandwich Islands images. As with the Antarctic there does not seem to be a consistent plan behind this, which makes it very unreliable for the data user – another wasted opportunity to establish Sentinel-2 as a reliable data source.

February 11, 2018
by chris

On imitated problem solving

As many of you know, for a few years now there has been a new trend in remote sensing and cartography called Artificial Intelligence or Machine Learning. Like many similar hypes, what is communicated about this technology is based little on hard facts and is largely dominated by inflated marketing promises and wishful thinking. I here want to provide a bit of context that is often missing in discussions on the matter and that is important to understand when you consider the usefulness of such methods for cartographic purposes.

AI or Machine Learning technologies are nothing new – when i was at university these were already pretty well established in the information sciences. The names have been misleading from the beginning though, since Intelligence and Learning imply an analogy to human intelligence and learning that does not really exist.

A good analogy to illustrate how these algorithms work is that of a young kid being mechanically trained: imagine a young kid that has grown up with no exposure to a real world environment. This kid has learned basic human interaction and language but has no significant experience of the larger world and society beyond this.

Now you start watching TV with that kid and every time there is a dog on screen you call out "Oh, a dog" and encourage the kid to follow your example. And after some time you let the kid continue on its own as a human dog detector.

This is pretty much what AI or Machine Learning technologies do – except of course that the underlying technological systems are still usually much less suited for this task than the human brain. But that is just a difference of degree and could be overcome with time.
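To make the analogy concrete, here is a minimal sketch of such mechanical training in Python using scikit-learn. The random "frames" and labels are of course made up for illustration – a real dog detector would use image data and a far more complex model – but the principle is the same: the system learns nothing but the statistical association between the inputs and the labels called out by the trainer.

    # Minimal sketch of "mechanical training" - illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stand-in for frames from the TV session: random "images"
    # flattened to feature vectors.
    frames = rng.random((200, 64))
    # Stand-in for the trainer calling out "Oh, a dog" (1) or not (0).
    labels = rng.integers(0, 2, 200)

    detector = RandomForestClassifier(n_estimators=50, random_state=0)
    detector.fit(frames, labels)        # the training phase

    new_frame = rng.random((1, 64))
    print(detector.predict(new_frame))  # the kid continues on its own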

The important thing to realize is that this is not how a human typically performs intellectual work.

To use an example closer to the domain of cartography – imagine the same scenario as with the kid above, but with detecting buildings on satellite images. And now consider the same task being performed by a diligent and capable human, like a typical experienced OpenStreetMap mapper.

The trained kid has never seen a real world building from the outside. It has no mental image associated with the word building called out by its trainer except for what it sees on the satellite images.

Experienced OSM mappers however have an in-depth knowledge of what a building is – both in the real world and in the abstract classification system of OpenStreetMap. If they see an empty swimming pool on an image they will be able to deduce from the shadows that it is not a building – even if they have never seen a swimming pool before. This typical qualified human interpretation of an image is based on an in-depth understanding of what is visible in the image, connecting it to the huge base of real world experience a human typically has. This allows humans to solve specific problems they have never been confronted with before, based on knowledge of universal principles like logic and the laws of physics.

As already indicated in the title of this post, AI or Machine Learning are in a way the imitation of problem solving in a cargo-cult-like fashion – like the kid in the example above, who has no understanding of what a dog or a building is beyond the training it receives and tries to imitate afterwards. This is also visible in the kind of funny errors you get from such systems – usually funny because they are stupid from the perspective of a human.

Those in decision making positions at companies like Facebook and Mapbox who try to push AI or Machine Learning into cartography (see here and here) are largely aware of these limitations. If they truly believed that AIs can replace human intelligence in mapping they would not try to push such methods into OSM, they would simply build their own geo-database using these methods, free of the inconvenient rules and constraints of OSM. The reason why they push this into OSM is because on their own these methods are pretty useless for cartographic purposes. As illustrated above, for principal reasons they produce pretty blatant and stupid errors, and even if the error rate is low that usually ruins things for most applications. What would you think of a map where one percent of the buildings are in the middle of a road or river or similar? Would you trust a self-driving car that uses a road database where 0.1 percent of the roads lead into a lake or a wall?

What Facebook & Co. hope for is that by pushing AI methods into OSM they can get the OSM community to clean up the errors their trained mechanical kids inevitably produce and thereby turn the practically pretty useless AI results into something of practical value – or, to put it more bluntly, to change OSM from being a map by the people for the people into a project of crowd sourced slave work for the corporate AI overlords.

If you follow my blog you know i am not at all opposed to automated data processing in cartography. I usually prefer analytical methods to AI-based algorithms though, because they produce better results for the problems i am dealing with. But one of the principles i try to follow strictly in that regard is never to base a process on manually post-processing machine-generated data. The big advantage of fully automated methods is that you can scale them very well. But you immediately lose this advantage if you start introducing manual post-processing, because this does not scale in the same way. If you ignore this because crowd-sourced work from the OSM community comes for free, that indicates a pretty problematic and arrogant attitude towards this community. Computers should perform work for humans, not the other way round.

If you are into AI/machine learning and want OSM to profit from it, there are a number of ways you can work towards this constructively:

  • make your methods available as open source to the OSM community to use as they see fit.
  • share your experience using these methods by writing documentation and instructions.
  • make data like satellite imagery available under a license and in a form that is well suited for automated analysis. This in particular means:
    • without lossy compression artefacts
    • with proper radiometric calibration
    • with all spectral bands available
    • with complete metadata
  • develop methods that support mappers in solving practically relevant problems in their work rather than looking for ways to get mappers to fix the shortcomings of the results of your algorithms.

In other words: You should do exactly the opposite of what Facebook and Mapbox are doing in this field.

I want to close this post with a short remark regarding the question whether we will in the future have machines that can perform intelligent work significantly beyond the level of a trained kid. The answer is: we already have them, in the form of computer programs written to solve specific tasks. The superficial attractiveness of AI or Machine Learning comes from the promise that it can help you solve problems you might not understand well enough to specifically program a computer to solve. I don't consider this likely to happen in the foreseeable future, because that would not just mean reproducing the complex life-long learning process of an individual human being but also the millennia of cultural and technological evolution of human society as a whole.

What is well possible though is that for everyday tasks we will in the future increasingly rely on this kind of imitated problem solving through AIs and this way lose the ability to analyze and solve these problems ourselves based on a deeper understanding in the way described above. If that happens we would obviously also lose the ability to recognize the difference between a superficial imitated solution and a real in-depth solution to the problem. In the end a building will then simply be defined as that which the AI recognizes as a building.

Western Alps autumn colors 2017

January 27, 2018
by chris

Mapping imagery additions

Over the last few days i added a number of images, produced from Sentinel-2 and Landsat data, to the OSM images for mapping.

There are three new images for the Antarctic:

McMurdo Bay area

This image covers the McMurdo Sound, McMurdo Dry Valleys and Ross Island. Data is from February 2017 – end of summer, but with quite a bit of seasonal sea ice cover still present.

There is a lot that can be mapped from this image in terms of topography, glaciers and other things. It can also be used to properly locate features where you only have an approximate position from other data sources. If you compare the image with existing data in OSM you will also see that there is a significant mismatch in many cases. Positional accuracy of the image – like that of the other Antarctic images – is good but not great. In mountainous areas at the edge of the image swath (here: in the northwest) errors can probably exceed 50 m on occasion but will otherwise usually be less.

Bunger Hills

Another part of the East Antarctic coast. This requires a bit of experience to distinguish between permanent and non-permanent ice. But existing mapping in the area is poor so there is a lot of room for improvement.

Larsen C ice shelf edge

This is an image for updating the ice shelf edge after the iceberg calving in 2017. The current mapping in OSM here is very crude since it is based on low resolution images.

Western Alps autumn colors

And then there is another image which is more of an experiment: an autumn image from the western Alps that shows autumn colors and could be helpful for leaf_cycle mapping. I am not quite sure how well this works – you probably need some level of local knowledge to interpret the colors correctly. The red-brown colored forested areas are usually deciduous broadleaved forest, in many cases beeches. Larches are more yellow in color and are often mixed with other types of trees, which makes them more difficult to identify. Also, the different types of trees change their colors at different times – depending also on the altitude – so a single image does not really cover everything and solid local knowledge is probably important to not misinterpret the colors.

I would be interested in feedback on how useful this image is for mapping leaf_cycle.


January 21, 2018
by chris

On permanence in IT and cartography

Many of my readers have probably heard about the company Mapzen closing down. In that context Mapzen CEO Randy Meech has published (or more precisely: re-published) a piece on volatility and permanence in the tech business which reminded me of a subject i had intended to write about here for some time.

When i started publishing 3d geovisualizations more than ten years ago these were unique both technically and design-wise. By my standards today these early works were severely limited in various ways – both due to my lack of knowledge and experience on the matter and due to the limited quality of available data and the severe limitations of computer hardware at that time. But at the same time they were in many ways miles ahead of what everyone else was producing in this field (and in some ways still are).

An early 2006 3d Earth visualization from me

Today, more than ten years after these early works, a lot has changed in both the quality of the results and in the underlying technology. But there are also elements that stayed almost the same, in particular the use of POV-Ray as the rendering engine.

A more recent view produced in 2015

Randy in his text contemplated the oldest companies of the world – and if you were to assemble a list of the oldest end user computer programs still in use, POV-Ray would be pretty far up, with its roots going back to 1987. Not as old as TeX, but still quite remarkable.

What makes programs like TeX or POV-Ray prevail in a world where, in both cases, a multi-billion dollar industry has been established – in parallel or subsequently – in a very different direction but in a way competing for the same tasks (typesetting text and producing 3d renderings respectively)?

The answer is that they are based on ideas that are timeless and radical in some way, while nonetheless being specifically developed for production use.

In the case of POV-Ray the timeless, radical idea was backwards raytracing in its pure form. There were dozens of projects following that idea, mostly in 1990s computer science research, but none of them was actually seriously developed for production use. There were also dozens of both open source and proprietary rendering engines developed for production use that made use of backwards rendering techniques, but all of them diluted the pure backwards rendering idea because of the attractiveness of the scanline rendering centered, hardware accelerated 3d that dominated the commercially important gaming and movie industries at the time.

Because POV-Ray was the only pure backwards renderer it was also the only renderer that could do direct rendering of implicit surfaces. Ryoichi Suzuki, who implemented this, by the way indicated back in 2001 that it was based on an idea originally implemented 15 years earlier – which makes it over 30 years old now. The POV-Ray isosurface implementation is the basis of all my 3d Earth visualizations.
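To give an idea of what direct rendering of an implicit surface means, here is a heavily simplified sketch in Python – not POV-Ray's actual root finding, which is considerably more sophisticated, just the basic principle: instead of tessellating the surface f(p) = 0 into triangles, you search for a root of f directly along each viewing ray.

    # Direct implicit surface intersection along a single ray -
    # the basic principle behind isosurface raytracing.
    import numpy as np

    def f(p):
        # Implicit surface f(p) = 0: a unit sphere as a trivial example.
        return p[0]**2 + p[1]**2 + p[2]**2 - 1.0

    def intersect(origin, direction, t_max=10.0, steps=1000, tol=1e-6):
        """Find the first root of f along the ray origin + t * direction."""
        direction = direction / np.linalg.norm(direction)
        ts = np.linspace(0.0, t_max, steps)
        prev_t, prev_v = ts[0], f(origin + ts[0] * direction)
        for t in ts[1:]:
            v = f(origin + t * direction)
            if prev_v * v < 0.0:              # sign change: surface crossed
                lo, hi = prev_t, t            # refine the root by bisection
                v_lo = prev_v
                while hi - lo > tol:
                    mid = 0.5 * (lo + hi)
                    v_mid = f(origin + mid * direction)
                    if v_lo * v_mid < 0.0:
                        hi = mid
                    else:
                        lo, v_lo = mid, v_mid
                return origin + 0.5 * (lo + hi) * direction
            prev_t, prev_v = t, v
        return None                           # ray misses the surface

    # A ray from a camera at (0, 0, -5) looking along +z hits the
    # sphere at approximately (0, 0, -1).
    print(intersect(np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0])))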

In the grand scheme of overall cultural and technological development, ten or 30 years are of course nothing. Eventually POV-Ray and my 3d map design work are almost certainly destined for oblivion. And maybe the underlying timeless, radical ideas are not as timeless as i indicated either. But what you can say with certainty is that short term commercial success is no indicator of the long term viability and significance of an idea for the advancement of society.

Going more specifically into cartography and map design technology – which most of my readers are probably more familiar with – companies like Mapbox/Google/Here/Esri etc. are focused on short term solutions for their momentary business needs – just like most businesses looking into 3d rendering in the 1990s found in scanline rendering techniques and their implementation in specialized hardware a convenient and profitable way to do the low quality 3d we all know from that era's computer games and movies.

Hardly anyone, at least no one in a position of power, at a company like Google or Mapbox has the long term vision of a Donald Knuth or an Eduard Imhof. This is not only because they cannot attract such people to work for them but primarily because that would be extremely dangerous for their short term business success.

Mapzen always presented itself as being less oriented towards short term business goals than other companies – and maybe it was, and this contributed to its demise. But at the same time they did not have the timeless and radical ideas, and the energy and vision to pursue them, to create something like TeX or POV-Ray that could define them and give them a long term advantage over big players like Google or Mapbox. What they produced were overwhelmingly products following the same short term trends as the other players, in a lemming-like fashion. Not without specific innovative ideas for sure, but nothing radical that would actually make them stand out.

Mapzen published a lot of their work as open source software and this way tried to make sure it lives on after the company closes. This is no guarantee however – there are tons of open source programs dozing away in the expanses of the net that no one looks at or uses any more.

While open sourcing development work is commendable and important for innovation and progress – TeX and POV-Ray as individual programs would never have lasted this long had they not been open source – it is important to note that the deciding factor is ultimately whether there is actually

  • a substantially innovative idea being put forward,
  • this idea being consistently developed to its real potential,
  • this idea being implemented and demonstrated in practical use,
  • the idea being shared and communicated publicly and
  • the idea bringing substantial cultural or technological advancement over pre-existing and near-future alternatives – which unfortunately can usually, if at all, only be determined in retrospect.
waterbody and ford rendering in the alternative-colors style

December 10, 2017
by chris

Water under the bridge

When i wrote about the rendering of footways/cycleways in OpenStreetMap based maps recently, i indicated that there are other changes i made in the alternative-colors style that deserve some more detailed explanation. Here i am going to introduce some of them, related to waterbody rendering.

Waterbodies in the standard style (and similarly in nearly all other OSM based maps) have always been rendered in a relatively simple, not to say crude way. Every water related feature is drawn in the same color, with water areas traditionally starting at z6, river lines at z8 and streams and smaller artificial waterways at z13. The z8 and z13 thresholds are so firmly established that mappers often decide how to tag waterways specifically to accommodate these thresholds. Since the smaller artificial waterways (ditch and drain) are rendered slightly thinner than streams, these tags are frequently abused to map smaller natural waterways. The only significant styling specialty in this traditional framework is that the small waterways starting at z13 get a bright casing so they are better visible on dark backgrounds.

Some time ago a change was introduced to render intermittent waterways with a dashed line. While this seems like a logical styling decision it turned out to work rather badly because of the problems of dashed line styles in combination with detailed geometries, as i already explained in the context of the footway rendering.

This is the situation that forms the basis of the changes i am going to write about here.

Differentiating waterbody types

As indicated above traditionally the OSM standard style renders all water features in the same color. This color was changed some time ago but it is still one single color that is used for everything – from the ocean to the smallest streams and ditches.

This all-in-one-color scheme does not require mappers to think about how they map waterbodies specifically – they can just paint the map blue, so to speak. In particular with water area tagging this has led to a lot of arbitrariness and relatively low data quality in the more detailed, more specific information. As i pointed out in the context of waterbody data use, the data cannot really be used for much else than painting waterbodies in a uniform color. At the same time this makes life very easy for the designers of these relatively simple maps since you don't have to worry about drawing order or other difficulties.

More specific information about waterbodies would however be very useful for data users, so it makes sense to render it to encourage mappers to be more diligent in recording such information. And differentiating between types of waterbodies can help a lot in creating a more readable map, since what color and styling work best varies depending on the type of waterbody. And since blue is widely reserved for water related features anyway, differentiating by color is quite feasible.

The basic three types of waterbodies i am differentiating are:

  • the ocean
  • standing inland waterbodies (primarily lakes)
  • flowing water (both line and polygon features)
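As a rough sketch in Python, classification by tags could look like the snippet below. This is purely illustrative – the tag sets are simplified and in practice the ocean is rendered from preprocessed coastline polygons rather than from tags.

    # Illustrative classification of OSM water features into the
    # three basic types - simplified, not the actual style logic.
    FLOWING_WATERWAYS = {"river", "stream", "canal", "ditch", "drain"}
    FLOWING_WATER_AREAS = {"river", "canal", "stream"}

    def water_type(tags):
        if tags.get("waterway") in FLOWING_WATERWAYS:
            return "flowing"                      # line features
        if tags.get("waterway") == "riverbank":
            return "flowing"                      # polygon features
        if tags.get("natural") == "water":
            if tags.get("water") in FLOWING_WATER_AREAS:
                return "flowing"
            return "standing"                     # lakes, ponds etc.
        if tags.get("landuse") in {"reservoir", "basin"}:
            return "standing"
        return None

    print(water_type({"waterway": "stream"}))                 # flowing
    print(water_type({"natural": "water", "water": "lake"}))  # standing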

Water colors for ocean (left), standing inland water (middle) and flowing water (right)

This coloring scheme is also visible in the low zoom demo i showed recently.

Rivers use the strongest and darkest color so they are well visible even on strong and structured backgrounds, while the ocean uses a brighter color so as not to dominate over the land colors, given that it covers a large area.

visibility of darker river color on dark background

differentiating standing and flowing water at the Rhine

In addition to differentiating by the physical type of waterbody, for line features i also distinguish between natural and artificial waterways – in a relatively subtle form, using a slightly brighter blue centerline at the higher zoom levels.

canal rendering with subtly brighter centerline

drain and stream rendering at z18 in comparison

Use of subtlety is of fundamental importance if you want to create a rich map that is still well readable. This distinction between natural and artificial waterways is strong enough to be clearly recognized by the keen observer, but at the same time it does not add a lot of noise that would otherwise affect the readability of the map.

Intermittency of waterbodies

current rendering in the standard style of intermittent rivers at z10

As indicated above, the standard style already differentiates intermittent waterways, but not in a very good way. I tested various options and ultimately came up with the following approach:

  • intermittent waterways start one zoom level later and are slightly thinner than perennial ones at the first zoom levels.
  • at z12-z13 intermittent rivers get a bright color centerline. This is fairly well visible and works much better with detailed geometries than dashing. At z14 and above i use dashing for rivers, but with very small gaps between the dashes so the line is still well visible as a continuous geometry. Streams, ditches and drains are rendered with similar dashing from z13 upwards.
  • intermittent standing water areas get a blue grain pattern with a transparent base so underlying landcover rendering is visible.
  • intermittent flowing water areas get a bright grain pattern on a blue base starting at z14. This ensures the geometry outline is still well visible which is fairly important for readability in case of riverbanks.

intermittent waterway rendering at z13 with bright centerline for rivers and dashing for streams

intermittent riverbank polygons at z15 in combination with intermittent streams and rivers

intermittent lakes at z10

In addition, for waterbodies with salt water (salt=yes) the ocean color is used in combination with a weak bright grain pattern. Here an abstract demo of all of these together:

intermittent water rendering in the alternative-colors style at z14 – click to see the z15 version

Other changes

In addition to the more fundamental changes described above i also did a lot of tuning for the line widths and other rendering parameters for a more balanced relationship between the different feature types and a more continuous change in appearance when zooming in or out.


Not directly connected to the waterbody changes but still somewhat related – i added rendering of fords. These are shown in the standard style as POIs with an icon starting at z16, which is a fairly unfortunate way of rendering them because:

  • the icon covers the most interesting and most important area of the actual crossing.
  • the icon is rendered for anything that is tagged ford=yes – this can be a big highway or a small footway – or anything else for that matter where the ford tag does not make any sense.
  • z16 is way too late to be of help to the map user in many cases.

POI rendering of fords – a lot of visual noise carrying very little useful information

In other words: This kind of rendering in many situations does not really improve the map.

I used a different approach, rendering fords similarly to bridges – after all a ford is a highway crossing a waterway without a bridge. The difficulty is that fords can be tagged on a node while bridges are by convention always mapped as ways. Rendering node based fords similarly to bridges requires quite a bit of effort and i am afraid it significantly adds to the already complex road code. But i think the visual results make it worth it.
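The core geometric step – deriving a short way segment from a ford node so it can be styled like a bridge – can be sketched in Python with shapely as follows. The 20 m segment length is an arbitrary illustration value and this is not the actual implementation of the style:

    # Rough sketch: derive a short renderable segment from a ford node.
    from shapely.geometry import LineString, Point
    from shapely.ops import substring

    def ford_segment(highway, ford, length=20.0):
        """Cut a piece of the highway line centered on the ford node,
        which can then be rendered like a bridge segment."""
        d = highway.project(ford)     # position of the node along the way
        start = max(d - length / 2.0, 0.0)
        end = min(d + length / 2.0, highway.length)
        return substring(highway, start, end)

    road = LineString([(0, 0), (50, 0), (100, 0)])
    node = Point(50, 0)               # tagged ford=yes
    print(ford_segment(road, node))   # a 20 m piece around the node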

fords mapped as nodes for footways, tracks and minor roads

As you can see this is usually intuitively recognizable as a ford and the crossing geometry is not obscured by a big and distracting icon.

ford rendering at z15 for various highway types – click for z16 version

December 1, 2017
by chris

OSMF board elections

The OpenStreetMap Foundation is tomorrow going to open the board elections for this year's Annual General Meeting, for two seats on the OSMF board. If you are a member of the OSMF i would strongly urge you to vote. If not, you might want to consider becoming a member (which however will not allow you to participate in these elections – for that you would have to have been a member a month before the elections).

This is of particular importance this year because the candidates for the positions on the board offer in part fairly contrasting positions on the direction of the OSMF and the OpenStreetMap project in general. You can get an idea of the views of the candidates in the Q&A on the OSM wiki, but you also need to read between the lines because candidates have partly picked up the bad habit from big politics of talking much without saying anything of substance. Sometimes the way the candidates deal with questions they do not like is more revealing than the actual answers.

Of course replacing two of seven board members will not immediately change the whole OSMF, but due to the quite contrasting views and backgrounds of the candidates it will send a significant message in terms of what direction the members support and this way will probably also weigh significantly on the other board members.

Of course even a fundamental change in direction of the OSMF would not necessarily have much influence on the OpenStreetMap project as a whole. One of the most remarkable aspects of OpenStreetMap is how little it depends on central organization and management. But of course if the OSMF and the OSM community start diverging significantly in goals and direction this could create a lot of friction.

Landsat Winter Alaska 2017

November 15, 2017
by chris

Into the light

I have a somewhat different satellite image than usual here:

This is a strip of nine Landsat scenes recorded over parts of Alaska in early winter a few days back. I rotated it to align roughly with the satellite recording direction, so you need to scroll down to see the whole image. As you scroll down you move from the limit of the polar night at the northern end towards the southwest and towards the sun, across about 1500 km.

You will notice a slight bend in the image when doing that – this is because the image coordinate system is not actually aligned to the satellite orbit but is a simple oblique Mercator projection. Due to the satellite's sun-synchronous orbit the satellite ground path is not a great circle but kind of spirals around the earth following the sun.
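For background: an orbit is sun-synchronous when the precession of its orbital plane, caused by the Earth's oblateness, matches the roughly one degree per day apparent motion of the mean sun. In standard orbital mechanics notation (this is textbook material, not from the original post) the condition is

    \dot{\Omega} = -\frac{3}{2} J_2 \left( \frac{R_E}{a (1 - e^2)} \right)^{2} n \cos i \approx \frac{360^\circ}{365.25\ \mathrm{days}} \approx 0.9856^\circ/\mathrm{day}

with J_2 the Earth's oblateness coefficient, R_E the Earth radius, a the semi-major axis, e the eccentricity, n the mean motion and i the inclination. For Landsat 8 at about 705 km altitude this yields the familiar inclination of roughly 98.2 degrees, which is why the ground track drifts relative to a fixed great circle.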

The southern end of this image strip is defined by the end of the Landsat recording which does not extend over the open ocean (it is Landsat after all). The northern end is the limit of normal Landsat recordings at this time of year due to the low sun position.

Here a Sentinel-3 OLCI image from the same day (and this time with north up orientation – also allowing you to identify where exactly the first image is placed) showing a much tighter northern limit.

And for comparison here a false color infrared Sentinel-3 SLSTR image where no recording limit is imposed showing the actual limit of light – but of course not in natural colors.

The two Sentinel-3 images also show an impressive cloud of dust extending SSW from the delta of the Copper River in southern Alaska at the right side of the image. Here a larger view to show this better.

And finally two crops from the first image – the first one from the north showing how you can watch the rivers freezing over at this time of the year near Fort Yukon.

And the second one from the south showing the indeed very windy yet sunny weather at Tugidak Island.

November 10, 2017
by chris

Satellite image news

Some news that might be of interest for some of my readers – without any attempt at completeness.

  • The Sentinel-2 package format has changed again. This change is rather small and will not significantly affect most users. The interesting and funny thing however is that the second time stamp saga from the previous change now seems to have gained another twist – the second time stamp is now specified to ensure a deterministic, repeatable name across time for the same product. In other words: it does not have a meaning any more, it is just there to be able to distinguish between several packages with different data but otherwise the same name (which can happen at data strip boundaries). A sketch of the naming scheme follows after this list.
  • Another thing on Sentinel-2 – it has not been widely advertised but there is a data quality report on Sentinel-2 data, updated at more or less regular intervals, here. You need to be careful when reading it of course: regular updates do not mean all information in the report is completely up to date, and you have to know how to interpret the information given. Take for example the absolute geolocation accuracy (which i have written about recently as well) – this can only be reliably measured for areas where you have accurate reference data, which does not usually include the regions where accuracy tends to be bad. So the <11 m at 95.5% confidence is likely not based on an unbiased set of reference locations. The reference locations are not published of course – neither is the source of the reference data used.
  • The USGS is starting to introduce what they call Landsat Analysis Ready Data. This essentially means Landsat imagery reprojected to a common coordinate system for the United States and distributed in tiled form. I am not going to review this data since i think this kind of product is conceptually and technically a dead end. It is by definition a regional data product that cannot be extended to global coverage, and performing double resampling (from the raw Level 0 data to the UTM grid of the orthorectified Level 1 product and then again to the Albers Equal Area projection of the ARD grid) is wasteful and unnecessary. There are obviously advantages to processing and using data in a common grid for larger regions, but a solution that limits you to areas within the United States is not really a universally usable approach.
  • In the field of commercial earth observation Planet Labs has launched six new SkySat satellites – these are the somewhat larger satellite systems from their acquisition of Terra Bella from Google. I briefly mentioned them in my discussion of Planet Labs some time ago. There is very little information publicly available on the actual operation of these satellites. They claim a recording capacity of 185k km^2 per day for the whole fleet of 13 of these satellites. That is not much. With a recording swath width of 8 km that amounts to less than 2000 km recording length per day per satellite, or about 20 seconds of recording per orbit (a quick check of this arithmetic follows after the list). Whether this is to be increased in the future is unknown, but at the moment it seems that these satellites – being positioned to record at different times of the day and having a monochrome video recording capability – are mostly intended for what you might call event photography from space.
  • There are two upcoming launches of Earth observation satellites – for November 14 there is the planned launch of JPSS-1 which carries a second VIIRS instrument in addition to the one on Suomi NPP launched in 2011. And in late December there is the planned launch of GCOM-C. Both have been subject to delays – JPSS-1 was originally supposed to launch in 2016, GCOM-C in 2014.
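As a side note on the package format: the sketch below parses the compact Sentinel-2 product naming scheme, with the field meanings as i understand them – treat this as illustration rather than an authoritative reference, and note that the example name is made up.

    # Illustrative parser for compact Sentinel-2 product names -
    # field interpretation as i understand it, not an official reference.
    import re

    PATTERN = re.compile(
        r"(?P<mission>S2[AB])_"
        r"(?P<level>MSIL1C|MSIL2A)_"
        r"(?P<sensing_time>\d{8}T\d{6})_"   # datatake sensing start time
        r"N(?P<baseline>\d{4})_"            # processing baseline
        r"R(?P<orbit>\d{3})_"               # relative orbit
        r"T(?P<tile>\d{2}[A-Z]{3})_"        # MGRS tile
        r"(?P<discriminator>\d{8}T\d{6})"   # the meaningless "second
        r"\.SAFE")                          # time stamp" discussed above

    name = "S2A_MSIL1C_20180206T103021_N0206_R108_T32TLR_20180206T123456.SAFE"
    match = PATTERN.match(name)
    print(match.groupdict() if match else "no match")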

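And regarding the SkySat recording capacity, here a quick back-of-the-envelope check in Python – the ground speed and orbits per day are typical low Earth orbit values assumed for the estimate:

    # Back-of-the-envelope check of the SkySat recording numbers.
    capacity_km2_per_day = 185_000   # claimed capacity of the whole fleet
    satellites = 13
    swath_km = 8.0
    ground_speed_km_s = 7.0          # typical LEO ground speed (assumed)
    orbits_per_day = 15              # typical for low Earth orbit (assumed)

    length_per_sat = capacity_km2_per_day / satellites / swath_km
    seconds_per_orbit = length_per_sat / ground_speed_km_s / orbits_per_day
    print(f"{length_per_sat:.0f} km recording length per satellite per day")
    print(f"{seconds_per_orbit:.0f} s of recording per orbit")
    # -> roughly 1800 km/day and about 17 s/orbit, consistent with the
    #    "less than 2000 km" and "about 20 seconds" estimates above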
I updated my satellite sensor chart accordingly. Note i still could not get myself to specify a full coverage interval for the PlanetScope satellites. They now show a decent monthly coverage of >90 percent between -60 and 75 degrees latitude for the combination of RapidEye and PlanetScope, but full coverage means full coverage for me. And demo or it did not happen.