Imagico.de

blog

Landsat mosaic based on pixel statistics

April 25, 2018
by chris
0 comments

Arctic mosaic and colors revisited

Continuing from the previous post, here are, as promised, more examples of applying pixel statistics methods to Landsat and Sentinel-2 data and of how this can help produce more accurate colors.

I have mentioned before that, based on the spectral characteristics, Landsat offers a significantly better basis for accurate natural colors than current low resolution systems like MODIS and Sentinel-3, and that from Landsat 7 and EO-1 via Landsat 8 to Sentinel-2 there is a notable trend towards less suitability for accurate color reproduction.

Visible light spectral bands of common open data earth observation satellite sensors

This assessment, being based on the spectral characteristics, is hard to demonstrate practically with individual images because the differences in viewing conditions are usually large compared to the differences in colors due to different spectral characteristics.

I have now produced a larger area image mosaic based on pixel statistics methods (which i discussed in the previous post), and by comparing it with the MODIS based Green Marble mosaic i can point out the effects of different spectral characteristics much better. The image uses data from Landsat 7, Landsat 8 and Sentinel-2 for the land surfaces and the Green Marble as background for water areas. The data basis is not that broad, so there are also significant color differences due to incomplete convergence. But you can still see the color differences compared to the MODIS mosaic quite prominently.

Arctic mosaic based on Landsat and Sentinel-2 data

Green Marble for comparison

The most striking difference is that the MODIS based Green Marble rarely features true gray tones. Most areas that are gray in the Landsat/Sentinel-2 mosaic show up in red and brown colors in the Green Marble. This is a result of the narrower spectral bands of the MODIS instrument, in particular the green band. Gray colors mean reflection is more or less equal in all three spectral bands of the human eye, but this does not necessarily mean it is completely uniform across the visible range. If it is not, a narrow spectral band will usually result in a non-neutral color being registered for a surface that would appear to be of neutral color in direct view by the human eye. The opposite is possible as well but practically much less likely.
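
To make this concrete, here is a minimal numerical sketch of the effect – with made-up spectral curves and band positions, not measured sensor responses:

    import numpy as np

    # wavelength grid across the visible range (nm)
    wl = np.arange(400.0, 701.0, 1.0)

    # hypothetical surface reflectance: flat except for a narrow dip
    # around 550 nm - purely illustrative, not measured data
    refl = 0.30 - 0.08 * np.exp(-0.5 * ((wl - 550.0) / 8.0) ** 2)

    def band_mean(center, width):
        # mean reflectance over an idealized rectangular spectral band
        sel = (wl >= center - width / 2) & (wl <= center + width / 2)
        return refl[sel].mean()

    # broad visible bands, roughly Landsat-like (illustrative widths)
    broad = [band_mean(480, 60), band_mean(560, 60), band_mean(655, 40)]
    # narrow green band, roughly MODIS-like (illustrative width)
    narrow_green = band_mean(555, 20)

    print("broad B/G/R: ", ["%.3f" % v for v in broad])
    print("narrow green:", "%.3f" % narrow_green)
    # the broad green band largely averages the dip away while the
    # narrow band registers it much more strongly - the same surface
    # comes out nearly neutral with broad bands but with a clear
    # red-brown cast when the narrow green value is used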

Moscow based on Landsat pixel statistics

Moscow based on Green Marble

Verkhoyansk Mountains based on Landsat pixel statistics

Verkhoyansk Mountains based on Green Marble

By the way, this Arctic mosaic is to my knowledge the first complete natural color mosaic of the Arctic at better-than-MODIS resolution. I don’t want to specify an actual resolution because of the limitations of the pixel statistics method described in the previous post. It was processed on a 30m grid (based on the multispectral Landsat resolution). Obviously my regional mosaics like those of Greenland and Scandinavia offer a significantly better resolution but are also more costly to produce.

Greenland sample from the Arctic mosaic based on Landsat and Sentinel-2 data

Same area based on the Landsat mosaic of Greenland

Same area based on Green Marble

I also have full coverage of Europe but so far not beyond. If you are interested in other areas let me know.

Europe mosaic based on Landsat and Sentinel-2 data

pixel statistics based on Landsat - Pyrenees

April 21, 2018
by chris
0 comments

Satellite image pixel statistics

As regular readers of this blog know, i have produced quite a few satellite image mosaic products over the past years and have reviewed the work of others in that field as well. New products of this type have been introduced quite frequently by various companies over the past 2-3 years, but amazingly the quality of those seems to have largely plateaued – most of the stuff being produced these days i don’t really find innovative enough to warrant a closer look.

And this is despite the fact that we have a lot of factors that could be considered to provide good conditions for quality improvements:

  • newer satellites are producing higher quality data.
  • we have a quickly growing body of image data that can be used as data source for image mosaic production.
  • computers to process the data are getting more powerful and cheaper.

The question i want to shed some light on is why, despite these promising circumstances, we don’t see a widespread improvement in the quality of the end use satellite image mosaic products available in services.

There are economic factors involved here of course, but one of the most important reasons is technological. The focus in satellite image mosaic production over the past 5-10 years has almost exclusively been on what i call pixel statistics techniques. The Mapbox cloudless atlas has been the iconic example of this. It was not the first – the Blue Marble Next Generation from 2005 was technically also based on pixel statistics, and techniques like this date back at least to early AVHRR data products in the 1980s. But the Mapbox mosaic was the first such product commercially produced on a global level for direct visual application.

Pixel statistics techniques mean that for every pixel in your processing grid you take all the source image data you have from different images taken at different times and run some statistics over it to estimate an ideal, representative pixel value to be used in the image mosaic being created.

The statistics used can be very simple – which can, under the right circumstances, still lead to pretty good results, as the Mapbox mosaic nicely demonstrated. But they can also be more complex and even take into account secondary data sources not derived from the image data. My Green Marble mosaic demonstrates this quite well. The main point all these techniques have in common is that every pixel is treated independently. This makes it a very attractive approach because it is (a) relatively simple to formulate an algorithm for and (b) very efficient to run on a large scale. These advantages have led almost everyone who wanted to do something in the field of satellite image mosaic production in recent years to choose a pixel statistics based approach.
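
As a minimal sketch of the basic idea – a simple per-pixel median variant of my own choosing, not the actual algorithm behind any of the products mentioned – here is what such a composite looks like in code:

    import numpy as np

    def pixel_statistics_composite(stack, valid_mask):
        # stack:      (n_images, height, width, n_bands) array of
        #             co-registered source images
        # valid_mask: (n_images, height, width) boolean array, True where
        #             an observation is usable (not cloud, not missing)
        data = stack.astype(float).copy()
        data[~valid_mask] = np.nan     # discard unusable observations
        # per-band median of the valid observations - every pixel is
        # treated completely independently of its neighbors
        return np.nanmedian(data, axis=0)

    # tiny synthetic example: 5 "images" of a 2x2 scene with 3 bands
    rng = np.random.default_rng(0)
    stack = rng.uniform(0.1, 0.3, size=(5, 2, 2, 3))
    stack[2] += 0.6                    # one cloudy image: much brighter
    mask = np.ones((5, 2, 2), dtype=bool)
    composite = pixel_statistics_composite(stack, mask)
    print(composite.shape)             # (2, 2, 3) - outlier suppressed

The median here is what makes even this trivial variant robust against occasional clouds; more sophisticated statistics refine the same per-pixel principle.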

What people apparently often have not realized when making this decision is that this strategy does not work equally well at different spatial resolutions. Satellite image mosaicing is not a scale independent problem. I hinted at this already back when reviewing the Google Landsat mosaic.

pixel statistics at 250m resolution – the Green Marble

The thing is that at a pixel size of 250m or more you can reasonably treat every pixel of the image independently. As long as you have a sufficiently broad data basis for your statistics to converge – leading to low uncorrelated noise levels and no significant banding artefacts – and your statistical method is well chosen, you can achieve well readable results. But if you move to significantly higher resolutions this no longer works, because our ability to read and understand a higher resolution image of the earth surface depends increasingly on fairly delicate spatial relationships within the image. And these are frequently lost if you treat every pixel independently.


15m resolution Landsat images

Many pixel statistics based mosaics you can find made from Landsat or Sentinel-2 data do not actually get to the point where you can see this, because the mentioned requirements (a broad enough data basis and a suitable statistical approach) are not met. But when they do, the results usually still appear confusing and are way behind an individual high quality image in terms of acuity and readability.

In short: Pixel statistics are a highly attractive method for image mosaic production that can work very nicely and efficiently at coarse resolutions, but they are practically unsuitable for much higher resolutions. I have never seen anyone attempt to apply pixel statistics methods to very high resolution images with pixel sizes in the sub-meter range, and it would probably look pretty horrible (though it might nicely demonstrate the point i am trying to make here). In the intermediate resolution range you see an effect of diminishing returns, i.e. with all other things being equal, increasing the source image and processing grid resolution would at some point no longer significantly improve the quality of the resulting image.


Sub-meter resolution images (from IGN Spain)

Practically this effect overlaps with other influences – like the limitations in positional accuracy of the source images and the typically lower number of images available at higher resolutions – so it can be difficult in practice to separate the different effects. But while the latter problems can be overcome with technological improvements, the main problem is a principal one that will always pose a hard limit for methods based on pixel statistics.

And ultimately this is why the quality of satellite image mosaic products has not improved much in recent years.

Because of these limitations i have – for higher resolution mosaics in the Landsat/Sentinel-2 resolution range (10-15m) – always concentrated on methods not based on pixel statistics. But pixel statistics have their charm – economically, but also because you can produce very accurate colors. Colors in an individual satellite image are always subject to the specific conditions under which the image was taken. You can put a lot of effort into counteracting this with atmosphere and BRDF compensation, but such methods also inevitably introduce variance due to the inaccurate simplifying assumptions being made. With a sufficiently broad data basis, pixel statistics can help reduce this variance and give you more precise colors.
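
A quick way to see this variance reduction is a simulation – heavily simplified, since the real error sources are neither independent nor Gaussian:

    import numpy as np

    rng = np.random.default_rng(42)
    true_value = 0.25   # "true" surface reflectance of one pixel

    for n in (5, 20, 100):
        # 10000 repeated experiments, each with n observations perturbed
        # by viewing-condition "noise" (simplified as additive Gaussian)
        obs = true_value + rng.normal(0.0, 0.05, size=(10000, n))
        estimates = np.median(obs, axis=1)
        print(f"n = {n:3d}: spread of estimated color {estimates.std():.4f}")
    # the spread of the estimated color shrinks roughly with 1/sqrt(n) -
    # a broad data basis therefore gives more precise colors than any
    # individual image can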

pixel statistics based on Landsat images

This is why i looked into pixel statistics based methods for Landsat and Sentinel-2 images recently. Not so much because of the higher resolution (which is nice to have but, as explained, comes with diminishing returns). More because of what you can achieve in terms of colors.

More on that in the next post. Until then, here is a quick teaser:

April 21, 2018
by chris
0 comments

It is called progress

Came across this remarkable map comparison today – click on the image to get to the source:

Note that i re-touched the image to make it look a bit more like it is meant to look – see the description of the image after following the link. This is however not completely accurate – you need to ignore the different label languages.

I find this a fairly educative and thought provoking example on several levels. You have the general concept of the map (static vs. dynamic in user interaction), you have the underlying technology to produce the map and you have the map design. And above all of this you have the purpose of the map (being a locator map in a Wikipedia article). One obvious thing you could ask yourself, for example, is why the map on the left looks the way it does and why the map on the right looks the way it does – in other words: what are the reasons and motives for the designs used here? Since both maps are produced for the same purpose, you will probably agree it is somewhat odd that they differ that much. Does this have reasons in the static vs. dynamic interaction? Does it have reasons in technology? Is it a matter of changing map design fashion? Or is it something entirely different?

Note that i wrote about the economic side of this matter, incidentally also in the context of Wikimedia maps, several years ago. I also wrote about the sociological side of map design in the context of OpenStreetMap-Carto more recently. But i still find this a rather intriguing topic with many open questions. If you have additional thoughts and perspectives on this matter, i would be curious to read about them in the comments.

Northeastern Alps by Landsat 1 in 1972

April 20, 2018
by chris
0 comments

Happy Anniversary Landsat Open Data

Ten years ago today the USGS started opening the full Landsat image archive as open data. Although this was not the first release of satellite imagery as open data – a selection of Landsat images had been opened before, and MODIS data was likewise already usable by everyone years earlier – it can today still without reservations be called the historic decision that most strongly shaped the satellite image landscape of today.

Here is one of the early Landsat images, from the Landsat 1 Multispectral Scanner System in 1972. Since the MSS did not record a blue spectral band, this uses a synthetic blue channel estimated from the other spectral bands (i wrote about this in the context of ASTER images before).
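
The post does not spell out the estimation method used; as a rough sketch of the general idea – with purely made-up coefficients, since a real implementation would fit them against a sensor that does record blue and would likely treat different surface types differently:

    import numpy as np

    def synthetic_blue(green, red, a=1.2, b=-0.2):
        # estimate a missing blue band as a linear combination of the
        # green and red bands - coefficients are illustrative only
        return np.clip(a * green + b * red, 0.0, 1.0)

    # usage with reflectance arrays scaled to 0..1
    green = np.array([[0.12, 0.30], [0.45, 0.08]])
    red = np.array([[0.10, 0.35], [0.40, 0.06]])
    rgb = np.dstack([red, green, synthetic_blue(green, red)])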


April 11, 2018
by chris
0 comments

Codification of contact

This blog post discusses the idea of Codes of Conduct – documents regulating social interaction – in the OpenStreetMap project. I in particular want to focus on Codes of Conduct for non-virtual meetings, i.e. for events where people meet in person.

First a bit of background: social interaction in a society is normally regulated by two different sets of rules:

  • The social conventions – the non-codified standards of social interaction of a society, largely defined by tradition (we do things certain ways because our parents and grandparents have already done so) and often fairly specific in the details to the social class, subculture or even family.
  • The local justice system.

Normally, following these two sets of rules in everyday life is something we can manage without much effort. But as indicated, these are local rules. With cross cultural and international social interaction things become much more difficult. You are very likely to break social conventions in international social interaction and depend on the tolerance and generosity of others in such cases. Because of this, cross cultural international social interaction is usually characterized by careful and considerate actions and reactions in the attempt to find a common ground in terms of shared social conventions. A significant part of this is also successfully managing the failure of a working social interaction – the organized and respectful retreat from a failed attempt at one.

These mechanisms of cross cultural social interaction have developed over thousands of years. The world’s travel literature is full of stories and anecdotes about positive examples with eye level interaction and cultural exchange, and negative examples with catastrophic failures sometimes leading to violent results, as well as many cases of arrogance and narrow-mindedness leading to a peaceful but completely unbalanced interaction. If you know people well who have significant experience with eye level cross cultural interaction, you can usually observe a distinct change in habitus and body language when they meet a person and realize a significant difference in social conventions. In cultures where balanced, peaceful interaction with other cultures is common (through trade and travel for example), these mechanisms have often found a place in the culture’s social conventions in the form of certain rituals and procedures.

In my experience, how well people can handle this also depends a lot on past experience in dealing with people following very different social conventions. For example, people who have grown up in a rural area are often better at this because they tend to be exposed early in their life to the significantly different and contrasting social conventions of life in towns and cities. People growing up in a city, on the other hand – while they might routinely experience a larger variety of social conventions in their immediate environment (though usually through the anonymity of city life) – often never experience a similarly harsh contrast in those conventions until they have grown up.

And while today we have, in a way, more frequent cross cultural interaction than at any time in history – due to real time international communication and relatively inexpensive travel opportunities – most of this interaction tends to be highly asymmetrical, and truly balanced eye level cross cultural social interaction has probably – relatively speaking – become a rare exception.

The corporate code of conduct

Codes of conduct were invented as an additional set of rules of social interaction in the context of corporations, regulating the interaction of corporations with their employees and among employees. They are created by the corporate management and contractually agreed upon by the employees.

Reasons for creating those are in particular:

  • avoiding the sometimes unreliable nature of uncodified social conventions.
  • adjusting the social conventions of the ordinary employees (who might come from a significantly different social and cultural background) to those of the management for their convenience.
  • limiting some of the freedoms offered by the justice system and social conventions because they are not considered good for productivity.
  • avoiding conflicts due to differences in the laws and social conventions of employees by imposing a uniform set of rules above them. This is in particular important for international corporations.

Practically, a corporate code of conduct is typically meant to supersede social conventions and local laws. It cannot normally contradict local laws (though there are quite a few cases where internal corporate rules are actually in conflict with legal requirements), but since code of conduct rules are usually more restrictive than general laws, they practically form the relevant limits of accepted behavior.

If we now have the idea of creating a code of conduct for an international OpenStreetMap meeting – like the SOTM conference – we could have two potential goals with that based on what i explained above:

  • helping and supporting people to engage in eye level cross cultural social interaction in the way i described above (i.e. careful and considerate interaction to try to establish a common ground in social conventions).
  • managing the event like a corporate event under a corporate code of conduct.

Now the SOTM CoC actually does neither of these. It does not provide any significant guidance on how to perform cross cultural social interaction, and it also lacks the clarity of rules and the goal orientation of a corporate code of conduct. Instead it comes closer to a third type, which i would call a political code of conduct.

The political code of conduct

The political code of conduct is the result of the idea of a corporate code of conduct being adapted by and for social reform and social justice movements and organizations. The idea here is – just as with a corporate code of conduct – to essentially replace existing social conventions and laws (because they are considered unjust) with a set of rules, in this case designed not to optimize productivity but to achieve certain political goals.

The political goals are not immediately obvious in the SOTM CoC since it has been toned down compared to the document it has been derived from.

Now i don’t want to judge the political ideas behind this, but no matter what you think of them, it should be clear that the resulting rules will primarily have the goal of implementing the political ideas (just as the corporate CoC wants to increase productivity). Most political CoCs are created in a culturally fairly homogeneous environment (like an organization of people with a common social background and political goals). While there are occasionally translations of such documents into different languages, i have never seen a CoC that has been designed with multilingual input and discussion.

All of this is highly problematic in that it does not allow people to freely seek and find an individual common ground in social conventions between them but imposes a certain set of social conventions from the top. No matter what the political motives for creating such rules are, they always come from a specific cultural background and are imposed on the rest of a global and culturally diverse community in an act of cultural dominance.

What remains to be discussed is what a code of conduct could look like that is meant to help and support people to engage in cross cultural social interaction on eye level in the traditional way, without a culturally biased rule set being imposed.

A culturally neutral code of conduct

Here is an attempt at this. Since it is formulated in a certain language you can of course argue that it is not neutral anyway, but i put quite a lot of effort into not relying on a specific interpretation of language and the meaning of certain words, and instead basing it on general thoughts and ideas that just happen to be communicated here in a certain language.

Some might also think calling this a code of conduct is incorrect because it is so very different from most documents you see titled this way. I would use a quote from the CoC of the Chaos Communication Congress, which describes the idea behind this draft pretty well too:

This is not a CoC in the anglo-american sense of the word. It appeals to morality rather than trying to instill it.

The event you are participating in is visited by a large variety of people with very different cultural and social backgrounds as well as personal views, ideas and abilities. Experiencing this, getting to know such a large variety of very different people can be a very educative and enjoyable experience but also requires tolerance, curiosity and open-mindedness from the participants. If you are able and willing to bring this with you, you are very welcome to participate in the event. This document is meant to help you do that in a way that makes it a positive experience for all participants.

As guests and visitors of the event you are expected to conform with the local laws. You are also encouraged to familiarize yourself with the local customs and social conventions before and during the visit. This will help you during as well as outside the event.

When interacting with others at the event you need to expect and accept that other guests and visitors might have views, ideas and expectations very different to those you are familiar with. You are expected to be open-minded and tolerant towards such differences. We encourage you to reach out to, communicate and interact with others but when doing that you should be sensitive to them and to the possibility that your behaviour might make them uncomfortable.

We expect you to always treat others at the event with at least the same level of respect, tolerance and generosity as you expect and depend on others to extend to you. To accomplish this you should try to always put the goal of a friendly and open-minded interaction and the comfort of others with this interaction above your specific goals in it – like for example an argument or a discussion you are having. As a participant of the event you are required to be willing to adjust your behaviour in the interest of others and at the same time should as much as you can avoid requiring others to adjust their behaviour to you.

The above rules and suggestions should avoid misunderstandings and conflicts and help resolve smaller issues amicably in most situations. In case of more difficult conflicts when interacting with others you are encouraged to approach other participants of the event to mediate the conflict. If others approach you to help with conflicts try to mediate by attempting to help people find a common ground without actually engaging in the conflict yourself. If you are unable to do so or in cases of more serious conflicts you should approach the organizers of the event. Our aim in such a situation will be to help the parties and if necessary give specific instructions to them which you are required to follow. Such intervention will always try as much as possible to stay neutral and not take sides in the conflict.

If you are a proponent of corporate or political CoCs you will most certainly not like this because it follows a very different approach to the problem of cross cultural social interaction. In my opinion this approach is the only way to organize cross cultural interaction in a non-judgemental way that allows all people of a globally diverse community the opportunity to express themselves and have the chance for cross cultural exchange. You can still argue that you do not actually need such a document, or you might want to shorten it further compared to the above.

The most common argument of proponents of political CoCs is that the rules are meant to protect the weak from the strong, the marginalized from the dominant, and are therefore justifiable. But that is in itself based on putting specific social conventions – those which lead to the perception of weak and strong, of marginalized and dominant – above others and is therefore culturally biased. The language and the words used by the CoCs themselves to set the limits of acceptable behavior already imply the dominance of certain social conventions – which is why my draft above is mostly limited to suggestions meant to help people in their social interaction (i.e. it is educative rather than normative) and explains fundamental ethical principles instead of imposing specific rules that require familiarity with the language and the underlying social conventions to follow them.

Another thing that should be kept in mind is that the local justice system obviously has a special role in the whole matter. This is not that different if you have no CoC or a different kind of CoC, obviously. So for the question of where to hold an international meeting, the justice system of the place in question has quite an impact.

How about virtual places?

Now how about CoCs for digital communication channels and platforms? If you have a truly global international channel, the same as above naturally applies. But most digital channels are at least language specific and, in the case of OpenStreetMap, often further specific to certain countries or regions. For those you can think about documenting certain common social conventions. But you need to keep in mind that you then can no longer claim the channel or platform in question is open to or representative of the whole global OSM community.

TLDR: Engaging in eye level cross cultural social interaction in a careful and considerate way, guided by basic and universal moral principles and dominated by tolerance and respect for the other side and a willingness to accept differences in social conventions even if they are inconvenient – the way essentially countless generations before us have managed peaceful cross cultural contact for thousands of years – is not the best way to do this, it is just the only way.

Design oriented generalization of open geodata

March 20, 2018
by chris
0 comments

FOSSGIS 2018 and talk announcement

Tomorrow the annual FOSSGIS conference starts once again – this year in Bonn. At the moment it looks like it is going to be pretty damn cold…

I am going to present a talk on Thursday afternoon about design oriented generalization of open geodata.

Here is a preview of two sample map renderings i am going to show:

Update: Video of the talk is available on media.ccc.de and on youtube.

Archangelsk in Winter

February 28, 2018
by chris
0 comments

Northern European Winter

Here are two winter impressions from Northern Europe – the first is from Northern Russia, showing the city of Archangelsk:

You can see the frozen Northern Dvina River and the likewise mostly frozen White Sea. Well visible in the low sun is also the smoke from the power plants in the area.

Here is a magnified crop:

The second image is from northwestern Scotland:

This image not only features snow cover in the mountains but also shows a remarkable color combination, with the dark green of forested areas contrasting with the brown colors of the non-wooded parts of the hills and mountains.

Both images are based on Sentinel-2 data and can be found in the image catalog on services.imagico.de.

Saunders Island, South Sandwich Islands by Sentinel-2A

February 23, 2018
by chris
0 comments

Satellite image news

Some news on open data satellite images:

I updated the satellite image coverage visualizations. Here is the matching coverage volume plot over time:

Open data satellite image coverage development

There are several important things you can see in that:

  • With Landsat 8 the USGS has for the second southern hemisphere summer season adopted a changed acquisition pattern (indicated by a drop in image volume around December/January) where Antarctic coverage is significantly reduced compared to previous years (see my yearly report for more details).
  • There have been significant fluctuations in the acquisition numbers of the Sentinel-2 satellites. Much of this is related to an inconsistent approach to the Antarctic here as well – with ESA sometimes acquiring Antarctic images with one of the satellites for a few weeks and then dropping it again. A consistent long term plan is not recognizable here.
  • In the last weeks Sentinel-2B acquisitions have been ramped up to full coverage at the nominal 10 day revisit interval (compared to the previous, fairly arbitrary pattern of 10 days for Europe, Africa and Greenland and 20 days for the rest). See the sample of a 10 day coverage below. This is good news.
  • The problems with missing acquisitions and individual tiles are still the same as before, as indicated by the orange areas in the visualizations.

Full 10 day coverage by Sentinel-2B in early 2018

Another thing that changed is that ESA seems to have made a smaller change to the Sentinel-2A acquisition pattern, now including the South Sandwich Islands. Here is an example of a rare, nearly cloud free view of Saunders Island:

Saunders Island, South Sandwich Islands by Sentinel-2A

Interestingly this is limited to Sentinel-2A – Sentinel-2B has so far not acquired any South Sandwich Islands images. As with the Antarctic, there does not seem to be a consistent plan behind this, which makes it very unreliable for the data user and kind of another wasted opportunity for establishing Sentinel-2 as a reliable data source.

February 11, 2018
by chris
0 comments

On imitated problem solving

As many of you know, for a few years now we have had a new trend in remote sensing and cartography called Artificial Intelligence or Machine Learning. Like many similar hypes, what is communicated about this technology is little based on hard facts and largely dominated by inflated marketing promises and wishful thinking. I here want to provide a bit of context, which is often missing in discussions on the matter and which is important to understand when you consider the usefulness of such methods for cartographic purposes.

AI or Machine Learning technologies are nothing new – when i was at university these were already pretty well established in the information sciences. The name has been misleading from the beginning though, since Intelligence and Learning imply an analogy to human intelligence and learning which does not really exist.

A good analogy to illustrate how these algorithms work is that of a young kid being mechanically trained: imagine a young kid that has grown up with no exposure to a real world environment. This kid has learned basic human interaction and language but has no significant experience of the larger world and society beyond this.

Now you start watching TV with that kid, and every time there is a dog on screen you call out “Oh, a dog” and encourage the kid to follow your example. And after some time you let the kid continue on its own as a human dog detector.

This is pretty much what AI or Machine Learning technologies do – except of course that the underlying technological systems are still usually much less suited for this task than the human brain. But that is just a gradual difference and could be overcome with time.

The important thing to realize is that this is not how a human typically performs intellectual work.

To use an example closer to the domain of cartography – imagine the same scenario as with the kid above, but with detecting buildings on satellite images. And now consider the same task being performed by a diligent and capable human, like a typical experienced OpenStreetMap mapper.

The trained kid has never seen a real world building from the outside. It has no mental image associated with the word building called out by its trainer except for what it sees on the satellite images.

Experienced OSM mappers however have an in depth knowledge of what a building is – both in the real world as well as in the abstract classification system of OpenStreetMap. If they see an empty swimming pool on an image they will be able to deduce from the shadows that this is not a building – even if they have never seen a swimming pool before. This typical qualified human interpretation of an image is based on an in depth understanding of what is visible in the image, connecting it to the huge base of real world experience a human typically has. This allows humans to solve specific problems they have never specifically been confronted with before, based on knowledge of universal principles like logic and the laws of physics.

As already indicated in the title of this post, AI or Machine Learning are in a way an imitation of problem solving in a cargo cult like fashion – like the kid in the example above, who has no understanding of what a dog or a building is beyond the training it receives and tries to imitate afterwards. This is also visible in the kind of funny errors you get from this type of system – usually funny because they are stupid from the perspective of a human.

Those in decision making positions at companies like Facebook and Mapbox who try to push AI or Machine Learning into cartography (see here and here) are largely aware of these limitations. If they truly believed that AIs can replace human intelligence in mapping, they would not try to push such methods into OSM – they would simply build their own geo-database using these methods, free of the inconvenient rules and constraints of OSM. The reason why they push this into OSM is that on their own these methods are pretty useless for cartographic purposes. As illustrated above, for principal reasons they produce pretty blatant and stupid errors, and even if the error rate is low, that usually ruins things for most applications. What would you think of a map where one percent of the buildings are in the middle of a road or river or similar? Would you trust a self driving car that uses a road database where 0.1 percent of the roads lead into a lake or a wall?

What Facebook & Co. hope for is that by pushing AI methods into OSM they can get the OSM community to clean up the errors their trained mechanical kids inevitably produce and thereby turn the practically pretty useless AI results into something of practical value – or, to put it more bluntly, to change OSM from being a map by the people for the people into a project of crowd sourced slave work for the corporate AI overlords.

If you follow my blog you know i am not at all opposed to automated data processing in cartography. I usually prefer analytical methods to AI based algorithms though, because they produce better results for the kinds of problems i am dealing with. But one of the principles i try to follow strictly in that regard is to never base a process on manually post-processing machine generated data. The big advantage of using fully automated methods is that you can scale them very well. But you immediately lose this advantage if you start introducing manual post-processing, because that does not scale in the same way. If you ignore this because crowd sourced work from the OSM community comes for free, that indicates a pretty problematic and arrogant attitude towards this community. Computers should perform work for humans, not the other way round.

If you are into AI/machine learning and want OSM to profit from it, there are a number of ways you can work towards this constructively:

  • make your methods available as open source to the OSM community to use as they see fit.
  • share your experience using these methods by writing documentation and instructions.
  • make data like satellite imagery available under a license and in a form that is well suited for automated analysis. This in particular means:
    • without lossy compression artefacts
    • with proper radiometric calibration
    • with all spectral bands available
    • with complete metadata
  • develop methods that support mappers in solving practically relevant problems in their work rather than looking for ways to get mappers to fix the shortcomings of the results of your algorithms.

In other words: You should do exactly the opposite of what Facebook and Mapbox are doing in this field.

I want to close this post with a short remark regarding the question of whether we will in the future have machines that can perform intelligent work significantly beyond the level of a trained kid. The answer is: we already have that, in the form of computer programs programmed to solve specific tasks. The superficial attractiveness of AI or Machine Learning comes from the promise that it can help you solve problems you might not understand well enough to be able to specifically program a computer to solve. I don’t consider this something that is likely to happen in the foreseeable future, because it would not just mean reproducing the complex life long learning process of an individual human being but also the millennia of cultural and technological evolution of human society as a whole.

What is well possible though is that for everyday tasks we will in the future increasingly rely on this kind of imitated problem solving through AIs and this way lose the ability to analyze and solve these problems ourselves based on a deeper understanding in the way described above. If that happens, we would obviously also lose the ability to recognize the difference between a superficial imitated solution and a real in depth solution of the problem. In the end a building will then simply be defined as that which the AI recognizes as a building.

Western Alps autumn colors 2017

January 27, 2018
by chris
4 Comments

Mapping imagery additions

Over the last few days i added a number of images to the OSM images for mapping, produced from Sentinel-2 and Landsat data.

There are three new images for the Antarctic:

McMurdo Bay area

This image covers McMurdo Sound, the McMurdo Dry Valleys and Ross Island. Data is from February 2017 – the end of summer, but with quite a bit of seasonal sea ice cover still present.

There is a lot that can be mapped from this image in terms of topography, glaciers and other things. It can also be used to properly locate features where you only have an approximate position from other data sources. If you compare the image with existing data in OSM you will also see that there is a significant mismatch in many cases. Positional accuracy of the image – like that of the other Antarctic images – is good but not great. In mountainous areas at the edge of the image swath (here: in the northwest) errors can probably be more than 50m on occasion but will otherwise usually be less.

Bunger Hills

Another part of the East Antarctic coast. This one requires a bit of experience to distinguish between permanent and non-permanent ice. But existing mapping in the area is poor, so there is a lot of room for improvement.

Larsen C ice shelf edge

This is an image for updating the ice shelf edge after the iceberg calving in 2017. The current mapping in OSM here is very crude since it is based on low resolution images.

Western Alps autumn colors

And then there is another image which is more of an experiment. This is an autumn image from the western Alps that shows autumn colors and could be helpful for leaf_cycle mapping. I am not quite sure how well this works. You probably need some level of local knowledge to be able to interpret the colors correctly. The red-brown colored forested areas are usually deciduous broadleaved forest, in many cases beeches. Larches are more yellow in color and are often mixed with other types of trees, which makes them more difficult to identify. Also, the different types of trees change their colors at different times – depending also on the altitude – so a single image does not really cover everything, and solid local knowledge is probably important in order not to misinterpret the colors.

I would be interested in feedback on to what extent this image is useful for mapping leaf_cycle.


January 21, 2018
by chris
0 comments

On permanence in IT and cartography

Many of my readers have probably heard about the company Mapzen closing down. In that context the Mapzen CEO Randy Meech has published (or more precisely: re-published) a piece on volatility and permanence in the tech business which reminded me of a subject i had intended to write about here for some time.

When i started publishing 3d geovisualizations more than ten years ago, these were unique both technically and design-wise. By my standards today these early works were severely limited in various ways – both due to my lack of knowledge and experience on the matter and due to the limits in quality of available data and the severe limitations of computer hardware at the time. But at the same time they were in many ways miles ahead of what everyone else was producing in this field (and in some ways still are).

An early 2006 3d Earth visualization from me

Today, more than ten years after these early works, a lot has changed, both in the quality of the results and in the underlying technology. But there are also elements that have stayed almost the same, in particular the use of POV-Ray as the rendering engine.

A more recent view produced in 2015

Randy in his text contemplated the oldest companies of the world – and if you assembled a list of the oldest end user computer programs still in use, POV-Ray would be pretty far up, with its roots going back to 1987. Not as old as TeX, but still quite remarkable.

What makes programs like TeX or POV-Ray prevail in a world where, in both cases, a multi-billion dollar industry has been established – in parallel or subsequently – in a very different direction but in a way competing for the same tasks (typesetting text and producing 3d renderings respectively)?

The answer is that they are based on ideas that are timeless and radical in some way, and that they were nonetheless specifically developed for production use.

In the case of POV-Ray the timeless, radical idea was backwards raytracing in its pure form. There were dozens of projects following that idea, mostly in the 1990s in the field of computer science research, but none of them was seriously developed for production use. There were also dozens of open source and proprietary rendering engines developed for production use making use of backwards rendering techniques, but all of them diluted the pure backwards rendering idea because of the attractiveness of scanline rendering centered, hardware accelerated 3d, which during that time dominated the commercially important gaming and movie industries.

Because POV-Ray was the only pure backwards renderer, it was also the only renderer that could do direct rendering of implicit surfaces. Ryoichi Suzuki, who implemented this, by the way indicated back in 2001 that it was based on an idea originally implemented 15 years earlier – which makes it over 30 years old now. The POV-Ray isosurface implementation is the basis of all my 3d Earth visualizations.
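
To illustrate what direct rendering of an implicit surface means – conceptually only, this is not POV-Ray's actual implementation – here is a sketch that finds the first intersection of a viewing ray with a surface defined by f(p) = 0, by stepping along the ray until f changes sign and then refining with bisection:

    import numpy as np

    def f(p):
        # implicit surface f(p) = 0: a unit sphere, illustrative only
        return p[0]**2 + p[1]**2 + p[2]**2 - 1.0

    def first_root_along_ray(origin, direction, t_max=10.0, step=0.07):
        # march along origin + t*direction looking for a sign change of
        # f, then refine the crossing with bisection - no polygon mesh
        # is ever created, the surface is intersected directly
        t_prev, f_prev = 0.0, f(origin)
        t = step
        while t <= t_max:
            f_t = f(origin + t * direction)
            if f_prev * f_t < 0.0:          # sign change: surface crossed
                lo, hi = t_prev, t
                for _ in range(40):         # bisection refinement
                    mid = 0.5 * (lo + hi)
                    if f_prev * f(origin + mid * direction) < 0.0:
                        hi = mid
                    else:
                        lo = mid
                return 0.5 * (lo + hi)
            t_prev, f_prev = t, f_t
            t += step
        return None                         # ray misses the surface

    origin = np.array([0.0, 0.0, -3.0])
    direction = np.array([0.0, 0.0, 1.0])   # unit viewing direction
    print(first_root_along_ray(origin, direction))  # ~2.0: front of sphere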

In the grand scheme of overall cultural and technological development, ten or thirty years are of course nothing. Eventually POV-Ray and my 3d map design work are almost certainly destined for oblivion. And maybe the underlying timeless, radical ideas are also not as timeless as i indicated. But what you can say with certainty is that short term commercial success is no indicator of the long term viability and significance of an idea for the advancement of society.

Going more specifically into cartography and map design technology – which most of my readers are probably more familiar with – companies like Mapbox/Google/Here/Esri etc. are focused on short term solutions for their momentary business needs – just like most businesses looking into 3d rendering in the 1990s found, in scanline rendering techniques and their implementation in specialized hardware, a convenient and profitable way to do the low quality 3d we all know from that era’s computer games and movies.

Hardly anyone, at least no one in a position of power, at a company like Google or Mapbox has the long term vision of a Donald Knuth or an Eduard Imhof. This is not only because they cannot attract such people to work for them but primarily because that would be extremely dangerous for the short term business success.

Mapzen has always presented itself as if it was less oriented towards short term business goals than other companies – and maybe it was, and this contributed to its demise. But at the same time they did not have the timeless and radical ideas, and the energy and vision to pursue them, to create something like TeX or POV-Ray that could define them and give them a long term advantage over the big players like Google or Mapbox. What they produced were overwhelmingly products following the same short term trends as the other players, in a lemming-like fashion. Not without specific innovative ideas for sure, but nothing radical that would actually make them stand out.

Mapzen published a lot of their work as open source software and this way tried to make sure it lives on after the company closes. This is no guarantee however. There are tons of open source programs dozing away in the expanses of the net that no one looks at or uses any more.

While open sourcing development work is commendable and important for innovation and progress – TeX and POV-Ray as individual programs would never have lasted this long had they not been open source – it is important to note that the deciding factor ultimately is whether there is actually

  • a substantially innovative idea being put forward,
  • this idea being consequently developed to its real potential,
  • this idea being implemented and demonstrated in practical use,
  • the idea being shared and communicated publicly, and
  • the idea bringing substantial cultural or technological advancement over pre-existing and near future alternatives – which unfortunately can, if at all, usually only be determined in retrospect.