I produced a new cloud free satellite image mosaic of Crete in Winter:
The image is based on Landsat data from 2016 to 2018.
You can find it on services.imagico.de if you are interested in a license or print.
February 12, 2018
February 11, 2018
As many of you know, for a few years now we have had a new trend in remote sensing and cartography called Artificial Intelligence or Machine Learning. Like many similar hypes, little of what is communicated about this technology is based on hard facts; it is largely dominated by inflated marketing promises and wishful thinking. I here want to provide a bit of context which is often missing in discussions on the matter and which is important to understand when you consider the usefulness of such methods for cartographic purposes.
AI or Machine Learning technologies are nothing new – when i was at university they were already well established in information sciences. The name has been misleading from the beginning though, since Intelligence and Learning imply an analogy to human intelligence and learning that does not really exist.
A good analogy to illustrate how these algorithms work is that of a young kid being mechanically trained: imagine a young kid that has grown up with no exposure to a real world environment. This kid has learned basic human interaction and language but has no significant experience of the larger world and society beyond this.
Now you start watching TV with that kid and every time there is a dog on screen you call out Oh, a dog and encourage the kid to follow your example. And after some time you let the kid continue on its own as a human dog detector.
This is pretty much what AI or Machine Learning technologies do – except of course that the underlying technological systems are usually still much less suited to this task than the human brain. But that is merely a difference of degree and could be overcome with time.
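The mechanical training in this analogy is what is called supervised learning. As a minimal sketch – with made-up numbers, invented feature names and a deliberately primitive nearest-neighbor rule standing in for whatever model is actually used – the trained kid amounts to something like this:

```python
# A minimal sketch of the "trained kid" style of learning: a 1-nearest-neighbor
# classifier that memorizes labeled examples and imitates them. Features and
# numbers are made up for illustration.
def nearest_neighbor_label(training, sample):
    """Return the label of the training example closest to the sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist2(ex[0], sample))[1]

# "TV sessions": feature vectors (say, fur score and leg count) with labels.
training = [
    ((0.9, 4), "dog"),
    ((0.8, 4), "dog"),
    ((0.1, 2), "not a dog"),
    ((0.2, 0), "not a dog"),
]

print(nearest_neighbor_label(training, (0.85, 4)))  # prints "dog"
```

Note the classifier has no concept of what a dog is – it only measures similarity to the memorized examples, which is exactly the limitation described.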
The important thing to realize is that this is not how a human typically performs intellectual work.
To use an example closer to the domain of cartography – imagine the same scenario with the kid above with detecting buildings on satellite images. And now consider the same task being performed by a diligent and capable human, like the typical experienced OpenStreetMap mapper.
The trained kid has never seen a real world building from the outside. It has no mental image associated with the word building called out by its trainer except for what it sees on the satellite images.
Experienced OSM mappers however have an in-depth knowledge of what a building is – both in the real world and in the abstract classification system of OpenStreetMap. If they see an empty swimming pool on an image they will be able to deduce from the shadows that this is not a building – even if they have never seen a swimming pool before. This typical qualified human interpretation of an image is based on an in-depth understanding of what is visible in the image, connecting it to the huge base of real world experience a human typically has. This allows humans to solve specific problems they have never been confronted with before based on knowledge of universal principles like logic and the laws of physics.
As already indicated in the title of this post, AI or Machine Learning are in a way an imitation of problem solving in cargo cult fashion – like the kid in the example above, who has no understanding of what a dog or a building is beyond the training it receives and tries to imitate afterwards. This is also visible in the kind of funny errors you get from such systems – usually funny because they are stupid from the perspective of a human.
Those in decision making positions at companies like Facebook and Mapbox who try to push AI or Machine Learning into cartography (see here and here) are largely aware of these limitations. If they truly believed that AIs can replace human intelligence in mapping they would not try to push such methods into OSM, they would simply build their own geo-database using these methods, free of the inconvenient rules and constraints of OSM. The reason why they push this into OSM is that on their own these methods are pretty useless for cartographic purposes. As illustrated above, for principal reasons they produce pretty blatant and stupid errors, and even a low error rate usually ruins things for most applications. What would you think of a map where one percent of the buildings sit in the middle of a road or river? Would you trust a self driving car that uses a road database where 0.1 percent of the roads lead into a lake or a wall?
What Facebook & Co. hope for is that by pushing AI methods into OSM they can get the OSM community to clean up the errors their trained mechanical kids inevitably produce and thereby turn the practically pretty useless AI results into something of practical value – or, to put it more bluntly, to change OSM from being a map by the people for the people into a project of crowd sourced slave work for the corporate AI overlords.
If you follow my blog you know i am not at all opposed to automated data processing in cartography. I usually prefer analytical methods to AI based algorithms though, because they produce better results for the problems i am dealing with. But one of the principles i try to follow strictly in that regard is never to base a process on manually post-processing machine generated data. The big advantage of fully automated methods is that they scale very well. But you immediately lose this advantage if you introduce manual post-processing, because that does not scale in the same way. If you ignore this because crowd sourced work from the OSM community comes for free, that indicates a pretty problematic and arrogant attitude towards this community. Computers should perform work for humans, not the other way round.
If you are into AI/machine learning and want OSM to profit from it there are a number of ways you can work towards this in a constructive way:
In other words: You should do exactly the opposite of what Facebook and Mapbox are doing in this field.
I want to close this post with a short remark on the question of whether we will in the future get machines that can perform intelligent work significantly beyond the level of a trained kid. The answer is: we already have that, in the form of computer programs written to solve specific tasks. The superficial attractiveness of AI or Machine Learning comes from the promise that it can help you solve problems you do not understand well enough to specifically program a computer to solve them. I don't consider this likely to happen in the foreseeable future because it would mean reproducing not just the complex life long learning process of an individual human being but also the millennia of cultural and technological evolution of human society as a whole.
What is well possible though is that for everyday tasks we will in the future increasingly rely on this kind of imitated problem solving through AIs and this way lose the ability to analyze and solve these problems ourselves based on a deeper understanding in the way described above. If that happens we would obviously also lose the ability to recognize the difference between a superficial imitated solution and a real in-depth solution of the problem. In the end a building would then simply be defined as that which the AI recognizes as a building.
January 27, 2018
Over the last days i added a number of images to the OSM images for mapping produced from Sentinel-2 and Landsat data.
There are three new images for the Antarctic:
This image covers McMurdo Sound, the McMurdo Dry Valleys and Ross Island. Data is from February 2017 – the end of summer, but with quite a bit of seasonal sea ice cover still present.
There is a lot that can be mapped from this image in terms of topography, glaciers and other things. It can also be used to properly locate features for which you only have an approximate position from other data sources. If you compare the image with existing data in OSM you will also see significant mismatch in many cases. Positional accuracy of the image – like that of the other Antarctic images – is good but not great. In mountainous areas at the edge of the image swath (here: in the northwest) errors can on occasion probably exceed 50m but will otherwise usually be less.
Another part of the East Antarctic coast. This requires a bit of experience to distinguish between permanent and non-permanent ice. But existing mapping in the area is poor so there is a lot of room for improvement.
This is an image for updating the ice shelf edge after the iceberg calving in 2017. The current mapping in OSM here is very crude since it is based on low resolution images.
And then there is another image which is more of an experiment. This is an autumn image from the western Alps that shows autumn colors and could be helpful for leaf_cycle mapping. I am not quite sure how well this works. You probably need some level of local knowledge to be able to interpret the colors correctly. The red-brown colored forested areas are usually deciduous broadleaved forest, in many cases beeches. Larches are more yellow in color and are often mixed with other types of trees which makes them more difficult to identify. Also the different types of trees change their colors at different times – also depending on the altitude – so a single image does not really cover everything and a solid local knowledge is probably important not to misinterpret the colors.
I would be interested in feedback on how useful this image is for mapping leaf_cycle.
January 21, 2018
Many of my readers have probably heard about the company Mapzen closing down. In that context the Mapzen CEO Randy Meech published (or more precisely: re-published) a piece on volatility and permanence in the tech business which reminded me of a subject i had intended to write about here for some time.
When i started publishing 3d geovisualizations more than ten years ago these were unique both technically and design-wise. By my standards today these early works were severely limited in various ways – both due to my lack of knowledge and experience on the matter and due to the limited quality of available data and the severe limitations of computer hardware at that time. But at the same time they were in many ways miles ahead of what everyone else was producing in this field (and in some ways still are).
Today, more than ten years after these early works, a lot has changed in both the quality of the results and in the underlying technology. But there are also elements that have stayed almost the same, in particular the use of POV-Ray as the rendering engine.
Randy in his text contemplated the oldest companies of the world, and if you assembled a list of the oldest end user computer programs still in use, POV-Ray would be pretty far up, with its roots going back to 1987. Not as old as TeX but still quite remarkable.
What makes programs like TeX or POV-Ray prevail in a world where in both cases there has been – in parallel or subsequently – a multi-billion dollar industry established in a very different direction but in a way competing for the same tasks (typesetting text and producing 3d renderings respectively)?
The answer is that they are based on ideas that are in some way timeless and radical, and that they were nonetheless specifically developed for production use.
In the case of POV-Ray the timeless, radical idea was backwards raytracing in its pure form. There were dozens of projects following that idea, mostly in computer science research of the 1990s, but none of them was seriously developed for production use. There were also dozens of open source and proprietary rendering engines developed for production use that made use of backwards rendering techniques, but all of them diluted the pure backwards rendering idea because of the attractiveness of scanline rendering centered, hardware accelerated 3d, which at that time dominated the commercially important gaming and movie industries.
Because POV-Ray was the only pure backwards renderer it was also the only renderer that could do direct rendering of implicit surfaces. Ryoichi Suzuki, who implemented this, by the way indicated back in 2001 that it was based on an idea originally implemented 15 years earlier – which makes it over 30 years old now. The POV-Ray isosurface implementation is the basis of all my 3d Earth visualizations.
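The principle behind direct rendering of an implicit surface can be sketched in a few lines: march along the viewing ray until the defining function changes sign, then refine the intersection by bisection. This is only an illustration of the idea – not POV-Ray's actual solver – and the function and numbers are chosen purely for demonstration:

```python
# Sketch of the core step in rendering an implicit surface f(p) = 0 by
# backwards raytracing: sample along the ray until f changes sign, then
# refine the hit point by bisection. Illustrative only.
def ray_isosurface_hit(f, origin, direction, t_max=10.0, step=0.01, tol=1e-9):
    """Return the smallest ray parameter t with f(origin + t*direction) = 0,
    or None if no sign change is found up to t_max."""
    def value(t):
        return f(tuple(o + t * d for o, d in zip(origin, direction)))

    t_prev, v_prev = 0.0, value(0.0)
    for i in range(1, int(round(t_max / step)) + 1):
        t = i * step
        v = value(t)
        if v_prev * v <= 0.0:          # sign change: root lies in [t_prev, t]
            lo, hi, v_lo = t_prev, t, v_prev
            while hi - lo > tol:       # bisection refinement
                mid = 0.5 * (lo + hi)
                v_mid = value(mid)
                if v_lo * v_mid <= 0.0:
                    hi = mid
                else:
                    lo, v_lo = mid, v_mid
            return 0.5 * (lo + hi)
        t_prev, v_prev = t, v
    return None

# Unit sphere as an implicit surface, ray from z = -3 toward the origin.
sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 - 1.0
t_hit = ray_isosurface_hit(sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
# The ray should hit the sphere at t = 2 (the point (0, 0, -1)).
```

Unlike scanline techniques, this works for any function you can evaluate – no triangulation of the surface is ever needed, which is what makes the pure backwards approach attractive for isosurfaces.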
In the grand scheme of overall cultural and technological development ten years or 30 years are nothing of course. Eventually POV-Ray and my 3d map design work are almost certainly destined for oblivion. And maybe also the underlying timeless, radical ideas are not as timeless as i indicated. But what you can say with certainty is that the short term commercial success is no indicator for long term viability and significance of an idea for the advancement of society.
Going more specifically into cartography and map design technology – which most of my readers are probably more familiar with – companies like Mapbox/Google/Here/Esri etc. are focused on short term solutions for their momentary business needs – just like most businesses looking into 3d rendering in the 1990s, which found in scanline rendering techniques and their implementation in specialized hardware a convenient and profitable way to do the low quality 3d we all know from that era's computer games and movies.
Hardly anyone, at least no one in a position of power, at a company like Google or Mapbox has the long term vision of a Donald Knuth or an Eduard Imhof. This is not only because they cannot attract such people to work for them but primarily because that would be extremely dangerous for the short term business success.
Mapzen always presented itself as less oriented towards short term business goals than other companies – and maybe it was, and maybe this contributed to its demise. But at the same time they did not have the timeless and radical ideas, and the energy and vision to pursue them, to create something like TeX or POV-Ray that could define them and give them a long term advantage over the big players like Google or Mapbox. What they produced were overwhelmingly products following the same short term trends as the other players, in a lemming-like fashion. Not without specific innovative ideas for sure, but nothing radical that would actually make them stand out.
Mapzen published a lot of their work as open source software and this way tried to make sure it lives on after the company closes. This is no guarantee however – there are tons of open source programs dozing away in the expanses of the net that no one looks at or uses any more.
While open sourcing development work is commendable and important for innovation and progress – TeX and POV-Ray as individual programs would have never lasted this long if they had not been open source – it is important to notice that the deciding factor ultimately is if there is actually
January 1, 2018
December 10, 2017
When i wrote about rendering of footways/cycleways in OpenStreetMap based maps recently i indicated there are other changes i made in the alternative-colors style that deserve some more detailed explanation. Here i am going to introduce some of them, related to waterbody rendering.
Waterbodies in the standard style (and similarly in nearly all other OSM based maps) have always been rendered in a relatively simple, not to say crude way. Every water related feature is drawn in the same color, with water areas traditionally starting at z6, river lines at z8 and streams and smaller artificial waterways at z13. The z8 and z13 thresholds are so firmly established that mappers often decide how to tag waterways specifically to accommodate these thresholds. Since the smaller artificial waterways (ditch and drain) are rendered slightly thinner than streams, these tags are frequently abused to map smaller natural waterways. The only significant styling specialty in this traditional framework is that the small waterways starting at z13 get a bright casing so they are better visible on dark backgrounds.
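For reference, these traditional thresholds amount to a small lookup table – the names here merely illustrate the scheme described, they are not taken from the actual style code:

```python
# Minimum zoom levels at which the OSM standard style traditionally starts
# rendering water features (illustrative table, not the actual style code).
WATER_MIN_ZOOM = {
    "water_area": 6,   # lakes, riverbank polygons etc.
    "river": 8,        # waterway=river lines
    "stream": 13,      # waterway=stream
    "ditch": 13,       # waterway=ditch (drawn slightly thinner)
    "drain": 13,       # waterway=drain (drawn slightly thinner)
}

def is_rendered(feature, zoom):
    """Would this feature type be drawn at the given zoom level?"""
    return zoom >= WATER_MIN_ZOOM[feature]

print(is_rendered("river", 8), is_rendered("stream", 12))  # prints: True False
```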
Some time ago a change was introduced to render intermittent waterways with a dashed line. While this seems like a logical styling decision it turned out to work rather badly because of the problems of dashed line styles in combination with detailed geometries as i already explained in context of the footway rendering.
This is the situation that forms the basis of the changes i am going to write about here.
As indicated above traditionally the OSM standard style renders all water features in the same color. This color was changed some time ago but it is still one single color that is used for everything – from the ocean to the smallest streams and ditches.
This all-one-color scheme does not require mappers to think about how they map waterbodies specifically, they can just paint the map blue so to speak. In particular with water area tagging this has led to a lot of arbitrariness and relatively low data quality in the more detailed, more specific information. As i pointed out in the context of waterbody data use, the data cannot really be used for much more than painting waterbodies in a uniform color. At the same time this makes life very easy for designers of these relatively simple maps since you don't have to worry about drawing order or other difficulties.
More specific information about waterbodies would however be very useful for data users, so it makes sense to render it to encourage mappers to be more diligent in recording such information. And differentiating between different types of waterbodies can help a lot in creating a more readable map, since what color and styling works best varies depending on the type of waterbody. And since blue is widely reserved for water related features anyway, differentiating by color is well possible.
The basic three types of waterbodies i am differentiating are:
Rivers use the strongest and darkest color so they are well visible even on a strong and structured background, while the ocean uses a brighter color so as not to dominate the land colors too much, given that it covers a large area.
In addition to differentiating by physical type of waterbody for line features i also distinguish between natural and artificial waterways in a relatively subtle form using a slightly brighter blue centerline at the higher zoom levels.
Use of subtlety is of fundamental importance if you want to create a rich map that is still well readable. This distinction between natural and artificial waterways is strong enough to be clearly recognized by the keen observer but at the same time it does not add a lot of noise that would otherwise affect the readability of the map.
As indicated above, the standard style already differentiates intermittent waterways but not in a very good way. I tested various options and ultimately came up with the following approach:
In addition for waterbodies with salt water (salt=yes) the ocean color is used in combination with a weak bright grain pattern. An abstract demo of all of these together here:
In addition to the more fundamental changes described above i also did a lot of tuning for the line widths and other rendering parameters for a more balanced relationship between the different feature types and a more continuous change in appearance when zooming in or out.
Not directly connected to the waterbody changes but still somewhat related – i added rendering of fords. These are shown in the standard style as POIs with an icon starting at z16 which is a fairly unfortunate way of rendering them because:
In other words: This kind of rendering in many situations does not really improve the map.
I used a different approach, rendering fords similar to bridges – after all a ford is a highway crossing a waterway without a bridge. The difficulty is that fords can be tagged on a node while bridges are by convention always mapped as ways. Rendering node based fords similar to bridges requires quite a bit of effort and i am afraid it adds significantly to the already complex road code. But i think the visual results make it worth it.
As you can see this is usually intuitively recognizable as a ford and the crossing geometry is not obscured by a big and distracting icon.
December 1, 2017
The OpenStreetMap Foundation is tomorrow going to open the board elections for this year's Annual General Meeting, for two seats on the OSMF board. If you are a member of the OSMF i would strongly urge you to vote. If not, you might want to consider becoming a member (which however will not allow you to participate in these elections – for that you would have had to be a member a month before the elections).
The reason why this is of particular importance this year is that this year's candidates for the positions on the board offer in part fairly contrasting positions on the direction of the OSMF and the OpenStreetMap project in general. You can get an idea of the ideas and views of the candidates in the Q&A on the OSM wiki, but you also need to read between the lines because the candidates have partly picked up the bad habit from big politics of talking much without saying anything of substance. Sometimes the way the candidates deal with questions they do not like is more revealing than the actual answers.
Of course replacing two of seven board members will not immediately change the whole OSMF, but due to the quite contrasting views and backgrounds of the candidates it will send a significant message about what direction the members support, and this way it will probably also weigh significantly on the other board members.
Of course even a fundamental change in direction of the OSMF would not necessarily have much influence on the OpenStreetMap project as a whole. One of the most remarkable aspects of OpenStreetMap is how little it depends on central organization and management. But of course if the OSMF and the OSM community start diverging significantly in goals and direction this could create a lot of friction.
November 18, 2017
Here is another view of the dust cloud from the previous post, from a few days later, as seen by Sentinel-2:
And another remarkable situation from the other side of Earth – an impressive assembly of big tabular icebergs near Elephant Island northeast of the Antarctic Peninsula – as seen by Sentinel-3.
November 15, 2017
I have a somewhat different satellite image than usual here:
This is a strip of nine Landsat scenes recorded over parts of Alaska in early winter a few days ago. I rotated it to align roughly with the satellite recording direction, and you need to scroll down to see the whole image. As you scroll down you move from the limit of the polar night at the northern end towards the southwest and towards the sun, across about 1500km.
You will notice a slight bend in the image when doing that – this is because the image coordinate system is not actually aligned to the satellite orbit but is a simple oblique Mercator projection. Due to the satellite's sun-synchronous orbit the satellite ground path is not actually a great circle but kind of spirals around the earth following the sun.
The southern end of this image strip is defined by the end of the Landsat recording which does not extend over the open ocean (it is Landsat after all). The northern end is the limit of normal Landsat recordings at this time of year due to the low sun position.
Here is a Sentinel-3 OLCI image from the same day (this time with north-up orientation – also allowing you to identify where exactly the first image is located) showing a much tighter northern limit.
And for comparison here is a false color infrared Sentinel-3 SLSTR image where no recording limit is imposed, showing the actual limit of light – though of course not in natural colors.
The two Sentinel-3 images also show an impressive cloud of dust extending SSW from the delta of the Copper River in southern Alaska at the right side of the image. Here is a larger view showing this better.
And finally two crops from the first image – the first one from the north showing how you can watch the rivers freezing over at this time of the year near Fort Yukon.
and the second from the south showing the indeed very windy yet sunny weather at Tugidak Island in the south.
November 10, 2017
Some news that might be of interest for some of my readers – without any attempt of completeness.
I updated my satellite sensor chart accordingly. Note i still could not get myself to specify a full coverage interval for the PlanetScope satellites. They now show a decent monthly coverage of >90 percent between -60 and 75 degrees latitude for the combination of RapidEye and PlanetScope, but full coverage means full coverage for me. And: demo or it did not happen.
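As a plausibility check on such claims: the fraction of a sphere's surface within a latitude band follows directly from the area between two latitudes being proportional to the difference of their sines, so the -60 to 75 degrees band alone covers roughly 92 percent of the Earth's surface:

```python
import math

# Fraction of a sphere's surface between two latitudes: proportional to the
# difference of the sines of the latitudes (Archimedes' hat-box theorem).
def surface_fraction(lat_south_deg, lat_north_deg):
    return (math.sin(math.radians(lat_north_deg))
            - math.sin(math.radians(lat_south_deg))) / 2.0

# The latitude band from the coverage claim quoted above.
print(round(surface_fraction(-60.0, 75.0), 3))  # prints 0.916
```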
November 1, 2017
About a year ago i wrote my report on the first year acquisitions of Sentinel-2 as well as for Landsat on a matching time frame. This was – and still is to my knowledge – the most detailed and accurate analysis of image data available from these satellites. Here is an update of this for a time frame from October 2016 to October 2017.
The October division is meant to include exactly one summer season for both the northern and the southern hemisphere. A calendar year based division would always split the southern hemisphere summer season.
Here is the plot for the overall recording volume of all satellites:
Both Landsat satellites have operated during the last year without any notable incidents or interruptions of recordings. Landsat 7 had its last orbit maintenance maneuver in early 2017 and is now in a steadily declining orbit, which means the recording time frame will move from the current approximately 10:15 to earlier times, as has happened for EO-1 previously.
Here are the coverage maps for Landsat 8 day time acquisitions:
The most notable difference from previous years is that Antarctic coverage was significantly reduced during the 2016-2017 summer (see last year's report for comparison). You can see this in the line plot on top as a dip in the Landsat 8 line near the end of 2016 which differs significantly from the patterns of the previous years. To my knowledge there has so far not been a statement from the USGS as to why this change was made.
Otherwise not much has changed – we now get routine off-nadir acquisitions for northern Greenland and the Antarctic interior. In Greenland these always happen for the same path which means there is room for improvement by selecting the path dynamically based on weather in the target area. All 2017 northern Greenland off-nadir images are severely affected by clouds.
Also we still have gaps in land area coverage at lower latitudes – Rockall and Iony Island (Edit: noticed there is actually one image for Rockall – though not regular coverage. Iony Island is actually the more meaningful omission).
For Sentinel-2A we are looking at the second year of operations and this might lead to expectations of an increased level of routine and therefore reliability. We also get the first images from Sentinel-2B. Here are the numbers for Sentinel-2A and Sentinel-2B separately:
And here the combined numbers with a different color scale.
I should emphasize that these are the publicly available images. As pointed out in a previous report there are significant differences between the published acquisition plans and the actual recordings, and furthermore publication of images is frequently incomplete. Here is an example from Sentinel-2B from my detailed statistics page (which i also updated to the current state).
I have not determined precise numbers but it is clear that the volume of images both planned but not recorded and recorded but not published is significant. Especially the latter, particularly in the arbitrariness shown in the image above, seems quite embarrassing.
The acquisition patterns are nearly the same as last year and apparently also the same for Sentinel-2A and Sentinel-2B. To summarize: most of Europe and Africa as well as Greenland are recorded at every opportunity – which means a ten day interval for each satellite. The rest of the larger land masses except Antarctica are recorded only at every second opportunity, except for some seemingly arbitrary small special interest areas where a ten day interval is also recorded. Smaller islands are missing entirely. Antarctica has been covered during the 2016-2017 summer but mostly at a much lower frequency than the rest of the Earth.
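The revisit arithmetic behind this can be made explicit: each satellite repeats its ground track every ten days, a second satellite phased half a cycle apart halves the interval, and recording only at every second opportunity doubles it again. A simplified sketch that ignores the swath overlap which shortens intervals at high latitudes:

```python
# Effective revisit interval in days for Sentinel-2-style acquisition plans
# (simplified: ignores the swath overlap between adjacent orbits that
# shortens intervals at high latitudes).
ORBIT_REPEAT_DAYS = 10  # repeat cycle of a single Sentinel-2 satellite

def revisit_interval(n_satellites, record_every_nth_opportunity):
    return ORBIT_REPEAT_DAYS / n_satellites * record_every_nth_opportunity

print(revisit_interval(1, 1))  # one satellite, every opportunity: 10.0 days
print(revisit_interval(2, 1))  # both satellites, every opportunity: 5.0 days
print(revisit_interval(2, 2))  # both satellites, every 2nd opportunity: 10.0 days
```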
Apart from the spatial distribution of acquisitions (which quite clearly is a conscious political choice) the most striking difference from Landsat is that high latitude acquisitions in Greenland and the European Arctic islands are not reduced to account for the naturally larger overlap between recording opportunities. In northern Greenland this leads during summer to frequently more than one image per day. While this can be nice for data users interested in those areas, and is also in a way compensation for the otherwise low priority of these regions, it is fairly wasteful in terms of recording resources and probably results from blindly sticking to the rule record Europe and Greenland at every opportunity, decided on by bureaucrats who have no clue what this actually means in practice.
So overall not that much has changed since last year – which i guess is good news for Landsat and less good news for Sentinel-2 since the latter is still subject to the same problems and limitations as last year. But maybe we just need a few more years to get used to these problems…
Apart from the problems already mentioned, Sentinel-2 operations continue to be plagued by delays in data processing and other incidents. While for Landsat you can fairly reliably predict when the next image will be recorded for a certain place on Earth and that it will be available a few hours afterwards, for Sentinel-2 this is still much less the case.
With all the beating on Sentinel-2's problems it should however be mentioned that with two satellites now operating at a more or less constant level, Sentinel-2 now usually offers a higher recording frequency than Landsat 8 – even in the lower priority areas, except of course for the small islands and Antarctica. (This is the practically sensible comparison since use of data from Landsat 7 is often fairly difficult due to the SLC gaps.) In other words: if you look for the most recent image of a certain point on Earth, it is more likely you will find it in the Sentinel-2 archive than in the Landsat 8 archive – despite the fact that delays in processing, missing recordings and missing publications put Sentinel-2 at a significant disadvantage.
And another positive thing about Sentinel-2: availability of the download infrastructure has improved a lot in the past months. Longer unscheduled downtimes during which no downloads are possible at all are now fairly rare.
Here for reference all the recording visualizations for this and the previous years:
year  day       night  day pixel coverage
2016  LS8, LS7  LS8    LS8, S2A
2017  LS8, LS7  LS8    LS8, S2A, S2B, S2 (both)
October 29, 2017
A few satellite image impressions from the last weeks showing islands in spring and autumn. First a view of southwest Iceland from just a few days ago:
Then a clear weather glimpse of South Georgia in spring – with a large iceberg to the northeast:
And finally an image of Onekotan Island in the northern Kuril Islands:
The first two are based on Copernicus Sentinel-2 data, the last is created from Landsat imagery.