Blog of Many Colours

Image featuring the sister cities of Sault Sainte Marie, Ontario, and Sault Sainte Marie, Michigan. ESA’s Proba satellite acquired this image on 11 August 2006 with its Compact High Resolution Imaging Spectrometer (CHRIS), designed to acquire hyperspectral images with a spatial resolution of 18 metres across an area of 14 kilometres. Data courtesy of SSTL through ESA.


The aspect of art at school that really stuck with me was learning about the main colours of the rainbow and how they fit together – like with like, such as yellow, green and blue, and like with unlike, such as shades of green with a fleck of red to put spark into a picture. Based on these ideas, as a teenager I used to construct geometric mandalas coloured in with gouache. So when I began studying remote sensing, it seemed natural that hyperspectral imaging would hold a special fascination.

The term ‘hyperspectral imaging’ was coined by Goetz in 1985 and is defined as ‘the acquisition of images in hundreds of contiguous, registered, spectral bands such that for each pixel a radiance spectrum can be derived.’ Put simply, whereas a television picture is made using three colour components (red, green and blue), hyperspectral imaging splits the spectrum into many, sometimes hundreds, of different grades of colour for each part of the image. The term made its way into scientific language by way of the intelligence community – the military became interested because it offered the ability to tell plastic decoys from real metal tanks through an object’s precise colour.
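
The ‘radiance spectrum per pixel’ idea is easiest to see as a data structure. Below is a minimal sketch using NumPy with a synthetic cube; the dimensions are illustrative (loosely Hyperion-sized), not real sensor output:

```python
import numpy as np

# A hyperspectral 'image cube': rows x columns x spectral bands.
# The data here are random; a real cube would come from a sensor
# such as Hyperion or CHRIS.
rows, cols, bands = 100, 100, 220
cube = np.random.rand(rows, cols, bands)

# An RGB image stores 3 values per pixel; the cube stores a full
# radiance spectrum for every pixel:
spectrum = cube[42, 17, :]   # the 220-band spectrum at pixel (42, 17)
print(spectrum.shape)        # -> (220,)
```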

When the first field spectral measurements were conducted in the early 1970s, technology was not advanced enough for the approach to be put into operation. However, developments in electronics, computing and software throughout the 1980s and into the 1990s brought hyperspectral imaging to the EO community.

A series of parallel hardware developments began in the 1980s, such as NASA JPL’s Airborne Imaging Spectrometer (AIS) in 1983, followed by AVIRIS (Airborne Visible/IR Imaging Spectrometer). The AVIRIS sensor was first flown in 1987 on a NASA aircraft at 20 km altitude and, to this day, it is still a key provider of high-quality hyperspectral data for the scientific community.

The hardware advances were matched by improvements in software capabilities, with the development of the iconic image cube method of handling this type of data by PhD students Joe Boardman and Kathryn Kierein-Young from the University of Colorado. Spectral libraries have been amassed for over 2,400 natural and artificial materials to enable them to be identified. The most famous is the ASTER spectral library, which includes inputs from the Johns Hopkins University (JHU) Spectral Library, the Jet Propulsion Laboratory (JPL) Spectral Library, and the United States Geological Survey (USGS – Reston) Spectral Library.
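
One common way to identify a material is to compare each pixel’s spectrum against library entries, for example with the spectral angle mapper. A minimal sketch follows; the ‘library’ spectra below are invented for illustration, not real ASTER entries:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra; smaller = better match."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Toy 'library' of three materials (values made up for illustration).
library = {
    "kaolinite": np.array([0.30, 0.45, 0.50, 0.35]),
    "vegetation": np.array([0.05, 0.08, 0.45, 0.40]),
    "water": np.array([0.10, 0.06, 0.02, 0.01]),
}

pixel = np.array([0.06, 0.09, 0.43, 0.38])
best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print(best)  # -> 'vegetation'
```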

Hyperspectral imaging was primarily developed for mapping soils and rock types, whose spectra are rich in character. Taking regions from the contiguous spectrum makes it possible to identify surface materials by their reflectance or emission, and also allows precise atmospheric correction, which can only be approximated if you are using discrete, wide colour bands. The shape of the reflectance or emittance spectrum yields information about grain size, abundance and composition, as well as the biochemistry of vegetation, such as the concentration of chlorophyll, and other pigments and life forms in water bodies.

Earth observation hyperspectral imaging really began with NASA’s Earth Observing-1 mission (EO-1), launched in 2000 with the Hyperion imager on board, which acquires around 220 spectral bands. Since then, various other missions have been launched, such as the Compact High Resolution Imaging Spectrometer (CHRIS) on the Proba-1 satellite in 2001, with 63 spectral bands, and the Infrared Atmospheric Sounding Interferometer (IASI) on board the MetOp series of meteorological satellites, the first of which was launched in 2006.

The coming years for hyperspectral imaging look exciting, with a whole series of planned missions including the Italian PRISMA (PRecursore IperSpettrale della Missione Applicativa), German EnMAP (Environmental Mapping and Analysis Program), NASA’s HyspIRI (Hyperspectral Infrared Imager), and JAXA’s (Japan Aerospace Exploration Agency) Hyperspectral Imager Suite (HISUI).

So for me, and anyone with the same fascination, the future really will be filled with many colours!


Blog written by Dr Louisa Reynolds

Two Fantastic Remote Sensing Innovations

Aberdeenshire (Scotland) January 2016 flooding captured by Sentinel-1; Data courtesy of Copernicus/ESA


Two academic remote sensing research announcements caught our eye this week. To be fair, most remote sensing announcements catch our eye, but these two were intriguing as they repurpose remote sensing techniques.

Remote Sensing the Human Body
Researchers at the Kyoto University Centre of Innovation have developed a system based on spread-spectrum radar technology to remotely sense signals from the human body. They have focussed on heartbeats, although they acknowledge that other elements, such as breathing and movement, are also measured by the system. It uses a unique signal analysis algorithm to extract the beats of the heart from the radar signals, and then calculates the beat-to-beat intervals to give the heart rate.
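
The researchers’ algorithm itself isn’t reproduced here, but the final step, computing a rate from beat-to-beat intervals, is easy to sketch. Below, a synthetic signal and SciPy’s peak detection stand in for the radar signal and the Kyoto analysis:

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative stand-in for the radar-derived heartbeat signal; the
# actual Kyoto algorithm is more sophisticated. Here we only show the
# last step: turning detected beats into a heart rate.
fs = 100.0                                # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)      # fake 1.2 Hz (72 bpm) heartbeat

peaks, _ = find_peaks(signal, height=0.5)
intervals = np.diff(peaks) / fs           # beat-to-beat intervals (s)
print(60.0 / intervals.mean())            # -> ~72 beats per minute
```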

Anyone who has ever needed to wear a Holter monitor for twenty-four or forty-eight hours will appreciate the advantage of having measurements taken remotely, in real time. In addition, under controlled conditions, the system has worked with a similar accuracy to an electrocardiograph (ECG). This will be music to the ears of regular ECG takers, who know how much removing those sticky electrode pads can hurt!

This system is still at an early developmental stage and further testing and validation is necessary, but it offers a potential new use of remote sensing technology.

Remote Sensing & Social Media
Researchers from Pennsylvania State University have led a project developing an innovative way of combining social media and remote sensing. The research was undertaken on a flood in Boulder, Colorado, in September 2013, with a particular focus on urban locations.

The team identified over 150,000 flood-related tweets and used a cloud-based geo-social networking application called CarbonScanner, from The Carbon Project, to cluster the pictures from Twitter and Flickr and so identify flooding hotspots. These were then used to obtain optical data, in this case from the high-resolution commercial satellite WorldView-2 and the lower resolution, but freely available, Landsat 8.

A machine learning algorithm was developed to perform a semi-automated classification identifying individual pixels that contained water. As the data was optical, the near-infrared band was used: water absorbs strongly in the near infrared, making it easily distinguishable from soil and vegetation. The researchers believe that this methodology has the potential to give emergency teams near real-time data, which could make life-saving differences to their work.
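
The team’s classifier isn’t spelled out in the announcement, but the physics it relies on, water’s strong near-infrared absorption, is easy to demonstrate with a simple threshold. A minimal sketch (the threshold and data are illustrative, not the researchers’ algorithm):

```python
import numpy as np

def water_mask(nir_band, threshold=0.05):
    """Flag pixels as water where near-infrared reflectance is low.

    Water absorbs strongly in the NIR, so its reflectance there is
    close to zero, while soil and vegetation reflect strongly. The
    threshold is illustrative and would be tuned per scene.
    """
    return nir_band < threshold

# Toy 2x3 NIR reflectance grid: left column 'water', the rest 'land'.
nir = np.array([[0.02, 0.30, 0.45],
                [0.03, 0.28, 0.50]])
print(water_mask(nir))
# [[ True False False]
#  [ True False False]]
```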

This is a particularly interesting development for us, given our current work on flood-mapping using synthetic aperture radar (SAR) data as part of the Space for Smarter Government Programme.

These two current examples show that remote sensing is an exciting, innovative and developing field, and one that is not solely related to Earth observation.

Sentinel-2A Data Released Into The Wild

False Colour Image of Qingdao, China, acquired by Sentinel-2A on the 21st August 2015. Data courtesy of ESA.


Sentinel-2A is already producing some fantastic images, and last week ESA announced the availability of Sentinel-2A orthorectified products in the Sentinel Data Hub. This will enable Sentinel-2 data to be accessed more widely, although, as we found out this week, there are still a few teething problems to sort out.

At the top of the blog is a stunning image of the Chinese city of Qingdao, in the eastern Shandong province. The false colour image shows the city of Qingdao and the surrounding area, with the centre dominated by Jiaozhou Bay, a natural inlet of the Yellow Sea. The bay is 32 km long and 27 km wide, and generally has a depth of around ten to fifteen metres, although there are deeper dredged channels to allow larger ships to enter the local ports. The bay itself has shrunk by around 35% since 1928, due to urban and industrial growth in the area.

Jiaozhou Bay Bridge: a subset of a false colour image of Qingdao, China, acquired by Sentinel-2A on the 21st August 2015. Data courtesy of ESA.


There is a tenuous linguistic link between Plymouth, where Pixalytics is based, and Qingdao: Plymouth is branded as Britain’s Ocean City, and Qingdao is home to the Ocean University of China. Qingdao does, however, have a much greater claim to fame. It is home to the world’s longest bridge over water: the Jiaozhou Bay Bridge is 42 km long and transects the bay. It is clearly visible on the satellite image, although you might not be able to make it out on the thumbnail at the top of the blog. If you look at the subset to the right, you should be able to see the bridge clearly, together with boats on the bay.

Now Sentinel-2A data has been released into the Sentinel Data Hub, images like this are waiting for everyone in the world to discover. We’ve been testing Sentinel-2A data for a few months already, as we were part of the community who gave feedback to ESA on the quality of the data. Sentinel-2A carries a Multispectral Imager (MSI) with 13 spectral bands: 4 visible and near-infrared bands with a spatial resolution of 10 m, 6 red-edge and shortwave infrared bands with a spatial resolution of 20 m, and 3 atmospheric correction bands with a spatial resolution of 60 m. When the identical Sentinel-2B is launched in late 2016, the pair will offer a revisit time of only 5 days.
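
The band-to-resolution grouping is easier to see laid out as data. A short sketch following ESA’s published MSI band definitions (band centre wavelengths omitted for brevity):

```python
# Sentinel-2A MSI bands grouped by spatial resolution, per ESA's
# published band definitions.
MSI_BANDS = {
    10: ["B2 (blue)", "B3 (green)", "B4 (red)", "B8 (NIR)"],
    20: ["B5", "B6", "B7", "B8a",          # red edge / narrow NIR
         "B11", "B12"],                    # shortwave infrared
    60: ["B1 (coastal aerosol)", "B9 (water vapour)", "B10 (cirrus)"],
}

for resolution, bands in MSI_BANDS.items():
    print(f"{resolution} m: {', '.join(bands)}")
```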

The data from Sentinel-2A forms part of the Copernicus programme and is freely available to use; as such, it is bound to be very popular. So popular, in fact, that we found it difficult to get onto the Data Hub this week, with slow data speeds and a few elements of the functionality not working efficiently, although we’re sure these will be resolved quickly. There are also user guides and tutorials available on the website to help people use the Data Hub.

The Sentinel-2A data release, following on from the microwave data of Sentinel-1, is a watershed moment for Earth observation companies: given the missions’ spatial resolution, revisit time and free availability, they offer a unique opportunity to develop satellite data services. We’re intending to use this data; are you?

How Many Earth Observation Satellites in Orbit in 2015?

Artist's rendition of a satellite - mechanik/123RF Stock Photo


If you’d like the update for 2016, please click here.

Following last week’s blog on the number of satellites orbiting the Earth, this week we’re focussing on Earth observation (EO) satellites. According to the Union of Concerned Scientists database, there were 333 active EO satellites on the 31st August 2015.

Examining these numbers further reveals that almost half have a purpose defined as providing optical imaging, with meteorological satellites accounting for another 13% and radar imaging for 10%. There is also a small group with the generic purpose of Earth science; more interesting, however, is the category of electronic intelligence. Over 20% of EO satellites fall into this category, and they have exclusively military users; four countries operate them, with the USA having the most, followed by China, then Russia and France. Who knows what exactly they do?

Of the 333 active EO satellites, 290 are in low Earth orbits, 34 in geostationary orbits and 9 in elliptical orbits. The oldest EO satellite still operational is the Satélite de Coleta de Dados (SCD) 1, a Brazilian satellite providing environmental data that was launched in 1993. Unsurprisingly, over half the active EO satellites were launched in the last five years, although this does include Planet Labs’ twenty-eight-strong Flock-1 constellation, launched in 2014 and 2015, which provides imagery with a spatial resolution of between 3 and 5 m.

Picking up on the launch sites we looked at last week, the most popular launch site for EO satellites is Vandenberg Air Force Base in Lompoc, California, followed by the two Chinese sites of the Taiyuan and Jiuquan satellite launch centres. The top five is completed by the Baikonur Cosmodrome in Kazakhstan and Cape Canaveral in Florida, although it is worth noting that 22 of the Flock-1 satellites were launched from the International Space Station.

In terms of numerical supremacy, the USA controls 34% of all EO satellites, China is next with 21%, and then Japan with 6.3%. The UK is listed as controlling only one satellite, DMCii’s wide-imaging DMC-2; although we’ve also participated in 8 of the listed European Space Agency (ESA) EO satellites.

In terms of the future, we’re expecting both Jason-3 and Sentinel-3A to be launched later this year. 2016 could see a variety of launches, including ESA’s Sentinel-1B and 2B, the cloud, aerosol and radiation mission EarthCARE, and the ADM-Aeolus wind satellite; DigitalGlobe’s commercial WorldView-4 satellite, which will have a panchromatic resolution of 30 cm and a multispectral resolution of 1.20 m; and Japan’s Advanced Land Observing Satellite, ALOS-3.

As we often say, it’s an exciting time to be part of Earth observation! Why not get involved?

Sentinel-2A dips its toe into the water

Detailed image of algal bloom in the Baltic Sea acquired by Sentinel-2A on 7 August 2015. Data courtesy of Copernicus Sentinel data (2015)/ESA.


With spectacular images of an algal bloom in the Baltic Sea, ESA’s Sentinel-2A has announced its arrival to the ocean colour community. As we highlighted in an earlier blog, Sentinel-2A was launched in June predominantly as a land monitoring mission. However, given that it offers higher resolution data than other current marine-focussed missions, it was always expected to dip its toe into ocean colour. And what a toe it has dipped!

The images show a huge bloom of cyanobacteria in the Baltic Sea, with blue-green swirls tracing eddies and currents. The image at the top of the blog shows the detail of the surface-floating bloom caught in the currents; a ship is making its way through the bloom, its wake producing a straight black line as deeper waters are brought to the surface.

Algal bloom in the Baltic Sea acquired by Sentinel-2A on 7 August 2015. Data courtesy of Copernicus Sentinel data (2015)/ESA.


To the right is a wider view of the bloom within the Baltic Sea. The images were acquired on the 7th August using the Multispectral Imager, which has 13 spectral bands; the visible bands, which were used here, have a spatial resolution of 10 m.

The Baltic Sea has long suffered from poor water quality, and in 1974 it became the first entire sea to be subject to measures to prevent pollution, with the signing of the Helsinki Convention on the Protection of the Marine Environment of the Baltic Sea Area. Originally signed by the Baltic coastal countries, a revised version was signed by the majority of European countries in 1992. This convention came into force on the 17th January 2000 and is overseen by the Helsinki Commission – Baltic Marine Environment Protection Commission – also known as HELCOM. The convention aims to protect the Baltic Sea area from harmful substances from land-based sources, ships, incineration, dumping and the exploitation of the seabed.

Despite the international agreements, the ecosystems of the Baltic Sea are still threatened by overfishing and by marine and chemical pollution. However, the twin drivers of the area’s algal blooms are warm temperatures and excessive levels of nutrients, such as phosphorus and nitrogen. These not only fuel the blooms; the Baltic Sea is also home to seven of the world’s ten largest marine dead zones, where low levels of oxygen in the water prevent marine life from thriving.

These images certainly whet the appetite of marine remote sensing scientists, who also have Sentinel-3 to look forward to later this year. That mission will focus on sea-surface topography, sea surface temperature and ocean colour, and is due to be launched in the last few months of 2015. It’s an exciting time to be monitoring and researching the world’s oceans!

5 Things We’ve Learnt Preparing For Our First Exhibition & the 1 Thing We Haven’t!

Pixalytics is becoming a conference exhibitor! After years of attending conferences, we decided, for the first time, to become an exhibitor. We are undertaking two exhibitions this year, and our first is GEO Business 2015, taking place later this month on the 27th and 28th at the Business Design Centre in London. As complete novices in the exhibition world, we’ve had an interesting learning curve. Here are five lessons we’ve learnt during our preparation, and the one thing we still don’t know.

  1. Everything Costs! We bought an exhibition space, which has three walls and our name above it. We knew we’d have to fill the shell to create the stand, but hadn’t realised exactly what this meant. It’s obvious now, but we hadn’t thought about the need to have electricity connected on the stand, the various options for furniture, hiring equipment, getting things to our stand, and how you actually attach items to the stand. We discovered that there are solutions to these, and numerous other things, but they all have a cost. Buying the stand space is only the start, and this has made us rethink everything from stand design to our travel arrangements.
  2. Stand Design. We knew we couldn’t compete with the big firms, with their cappuccino machines, freshly baked cakes and leather chairs. We had to go for something different, so we’ve attempted to create an interesting, intriguing, slightly vintage and cost-effective stand (see lesson 1!). If you are at GEO Business, come along and tell us what you think. As a sneak preview, the blog picture is part of our stand.
  3. Promotional Items. You need to have promotional items, freebies and things to hand out; but the question is what? We wanted items that were interesting, promoted us, and ideally would make it back to the desks of potential customers. We discounted novelty items, expensive items (see lesson 1!) and unwrapped sweets (you never know where people’s hands have been!). We’ve settled for pens (useful, and might make it back to desks) and postcards (interesting and promoting us); wrapped sweets are still being debated, so you’ll have to come onto the stand to find out the decision.
  4. Talk To People, Not The Internet. A lot of the exhibition preparation can be done on the internet and by email, but we had lots of questions. We found it far easier to talk to people rather than simply fill out forms. We gained a lot of information by talking to the conference organising team (thank you, Danielle); the company hiring out the audio-visual equipment were also helpful, as were our promotional material suppliers (Adam from Redrok was great!).
  5. Expect Phone Calls. We got a lot of phone calls once our participation was on the exhibition website, all of which were trying to sell us something! The most surprising were the numerous, and we do mean numerous, calls we’ve had offering us discounted hotel rooms.

So these are the five things we’ve learnt in our preparation, and I’m sure there will be more to learn during the stand construction and the exhibition itself. So what about the one thing we haven’t learnt? The thing we have no idea about is whether all of this effort will be worth it.

So, a question for all experienced exhibitors: how do you decide if an exhibition stand has been worthwhile? Is it the number of business cards collected, the number of people spoken to, the amount of publicity generated, or is it about the amount of new work generated? Drop us a comment, or a tweet to @pixalytics, telling us how you measure exhibition success.

If you are coming to GEO Business 2015, please drop by the stand and say hello.

How to Measure Heights From Space?

Combining two Sentinel-1A radar scans from 17 and 29 April 2015, this interferogram shows changes on the ground that occurred during the 25 April earthquake that struck Nepal. Contains Copernicus data (2015)/ESA/Norut/PPO.labs/COMET–ESA SEOM INSARAP study


Accurately measuring the height of buildings, mountains or water bodies is possible from space. Active satellite sensors send out pulses of energy towards the Earth and measure the strength and timing of the energy received back, enabling them to determine the heights of the objects struck by the pulses.

This measurement of the time it takes an energy pulse to return to the sensor can be used for both optical and microwave data. Optical techniques such as Lidar send out a laser pulse; however, within this blog we’re going to focus on techniques using microwave energy, which operate within the Ku, C, S and Ka frequency bands.

Altimetry is a traditional technique for measuring heights. This type of technique is termed Low Resolution Mode, as it sends out a pulse of energy that returns from a wide footprint on the Earth’s surface. Therefore, care needs to be taken with variable surfaces, as the energy reflected back to the sensor mixes measurements from different surfaces. The signal also needs to be corrected for its speed of travel through the atmosphere and for small changes in the orbit of the satellite, before it can be used to calculate a height to centimetre accuracy. Satellites that use this methodology include Jason-2, which operates in the Ku band, and Saral/AltiKa, operating in the Ka band. Pixalytics has been working on a technique to measure river and flood water heights using this type of satellite data. This has a wide range of potential applications in remote area monitoring, early warning systems and disaster relief, and, as shown in the paper ‘Challenges for GIS remain around the uncertainty and availability of data’ by Tina Thomson, offers potential for the insurance and risk industries.
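
The underlying range calculation is straightforward, even though the corrections are where most of the effort goes. A minimal sketch follows, with an assumed satellite altitude, travel time and a single lumped correction term (illustrative numbers, not real mission values):

```python
# Toy altimetry range/height calculation (values illustrative).
C = 299_792_458.0          # speed of light in a vacuum, m/s

def surface_height(two_way_time_s, satellite_altitude_m, corrections_m=0.0):
    """Height of the reflecting surface above the reference surface.

    range = c * t / 2 for the two-way travel time, then
    height = satellite altitude - range, plus the atmospheric and
    orbit corrections the blog mentions (lumped together here).
    """
    range_m = C * two_way_time_s / 2.0
    return satellite_altitude_m - range_m + corrections_m

# A Jason-2-like altitude of ~1336 km and a two-way time of ~8.9 ms:
print(surface_height(8.912e-3, 1_336_000.0))
```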

A second methodology for measuring heights using microwave data is Interferometric Synthetic Aperture Radar (InSAR), which uses phase measurements from two or more successive satellite SAR images to determine the Earth’s shape and topography. It can detect millimetre-scale changes in height and can be used to monitor natural hazards and subsidence. InSAR works best with relatively static surfaces, such as buildings, as the successive satellite images can be accurately compared. However, where you have dynamic surfaces, such as water, the technique is much more difficult to use, as the surface will have naturally changed between images. Both ESA’s Sentinel-1 and CryoSat-2 carry instruments with which this technique can be applied.
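
The core phase-to-displacement relationship is compact: each full 2π cycle of phase change corresponds to half a radar wavelength of motion along the line of sight. A minimal sketch (the wavelength is the approximate Sentinel-1 C-band value; real InSAR processing also requires phase unwrapping plus atmospheric and topographic corrections):

```python
import numpy as np

# Line-of-sight displacement from an interferometric phase change.
WAVELENGTH_M = 0.0555       # Sentinel-1 C-band radar wavelength, ~5.55 cm

def los_displacement(delta_phase_rad):
    """Each 2*pi of phase change = half a wavelength of motion."""
    return (WAVELENGTH_M / (4 * np.pi)) * delta_phase_rad

print(los_displacement(np.pi))   # ~0.0139 m towards/away from the satellite
```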

The image at the top of the blog is an interferogram created from data collected by Sentinel-1 in the aftermath of the recent earthquake in Nepal. The colours on the image reflect the movement of the ground between the before and after images; initial investigations by scientists indicate that Mount Everest has shrunk by 2.8 cm (about an inch) following the quake, although this needs further research to confirm the height change.

From the largest mountain to the smallest changes, satellite data can help measure heights across the world.

Ocean Colour Cubes

August 2009 Monthly Chlorophyll-a Composite; data courtesy of the ESA Ocean Colour Climate Change Initiative project


It’s an exciting time to be in ocean colour! A couple of weeks ago we highlighted the new US partnership using ocean colour as an early warning system for harmful freshwater algae blooms, and last week a new ocean colour CubeSat development was announced.

Ocean colour is something very close to our heart; it was the basis of Sam’s PhD and a field of research she remains highly active in today. When Sam began her PhD, the Coastal Zone Color Scanner (CZCS) was the main source of satellite ocean colour data, until it was superseded by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), which became the focus of her role at Plymouth Marine Laboratory.

Currently, there are a number of ocean colour instruments in orbit:

  • NASA’s twin MODIS instruments on the Terra and Aqua satellites
  • NOAA’s Visible Infrared Imager Radiometer Suite (VIIRS)
  • China’s Medium Resolution Spectral Imager (MERSI), Chinese Ocean Colour and Temperature Scanner (COCTS) and Coastal Zone Imager (CZI) onboard several satellites
  • South Korea’s Geostationary Ocean Color Imager (GOCI)
  • India’s Ocean Colour Monitor on-board Oceansat-2

Despite having these instruments in orbit, there is very limited global ocean colour data available for research applications. This is because the Chinese data is not easily accessible outside China, Oceansat-2 data isn’t of sufficient quality for climate research, and GOCI is on a geostationary satellite, so its data covers only a limited geographical area focussed on South Korea. With MODIS, the Terra satellite has limited ocean colour applications due to issues with its mirror, and hence its calibration; recently the calibration of the Aqua instrument has also become unstable due to its age. Therefore, the ocean colour community is essentially left with VIIRS, and the data from this instrument has only recently been proven.

With limited good quality ocean colour data, there is significant concern over the potential loss of continuity in this valuable dataset. The next planned instrument to provide a global dataset is OLCI onboard ESA’s Sentinel-3A, due to be launched in November 2015, with everyone keeping their fingers crossed that MODIS will hang on until then.

Launching a satellite takes time and money, and satellites carrying ocean colour sensors have generally been big: for example, Sentinel-3A weighs 1250 kg and the MODIS instrument alone 228.7 kg. This is why the project announced last week to build two ocean colour CubeSats is so exciting; they are planned to weigh only 4 kg each, which reduces both the expense and the launch lead time.

The project, called SOCON (Sustained Ocean Observation from Nanosatellites), will see Clyde Space, from Glasgow in the UK, build an initial two prototype SeaHawk CubeSats carrying HawkEye ocean colour sensors, with a ground resolution of between 75 m and 150 m per pixel, to be launched in early 2017. The project consortium includes the University of North Carolina, NASA’s Goddard Space Flight Centre, the Hawk Institute for Space Sciences and Cloudland Instruments. The eventual aim is to have constellations of CubeSats providing a global view of both ocean and inland waters.

There are a number of other ocean colour satellite launches planned for the next ten years, including follow-on missions such as Oceansat-3, two missions from China, GOCI-2, and a second VIIRS mission.

With new missions, new data applications and miniaturised technology, we could be entering a purple patch for ocean colour data – although purple in ocean colour imagery usually represents a chlorophyll-a concentration of around 0.01 mg/m3 on the standard SeaWiFS colour palette, as shown in the image at the top of the page.

We’re truly excited and looking forward to research, products and services this golden age may offer.

Ocean Colour Partnership Blooms

Landsat 8 Natural Colour image of Algal Blooms in Lake Erie acquired on 01 August 2014. Image Courtesy of NASA/USGS.


Last week NASA, NOAA, USGS and the US Environmental Protection Agency announced a $3.6 million partnership to use satellite data as an early warning system for harmful freshwater algae blooms.

An algal bloom refers to a high concentration of microalgae, known as phytoplankton, in a body of water. Blooms can grow quickly in nutrient-rich waters and potentially have toxic effects. Shellfish filter large quantities of water and can concentrate the algae in their tissues, allowing toxins to enter the marine food chain and potentially posing a risk to human consumers. Blooms can also contaminate drinking water: last August, for example, hundreds of thousands of people in and around Toledo, Ohio, were told not to drink their water after an algal bloom in Lake Erie.

The partnership will use the satellite remote sensing technique of ocean colour as the basis for the early warning system. Ocean colour isn’t a new technique; observations were recorded as early as the 1600s, when Henry Hudson noted in his ship’s log that a sea pestered with ice had a black-blue colour.

Phytoplankton within algal blooms are microscopic, some only a thousandth of a millimetre in size, so it’s not possible to see individual organisms from space. However, phytoplankton contain a photosynthetic pigment, chlorophyll, that is visible to the human eye, and in sufficient quantities this material can be measured from space. As the phytoplankton concentration increases, reflectance in the blue waveband decreases whilst reflectance in the green waveband increases slightly. Therefore, a ratio of blue to green reflectance can be used to derive quantitative estimates of the concentration of phytoplankton.
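
Operational algorithms such as NASA’s OCx family implement this relationship as a polynomial in the log of the maximum blue-to-green reflectance ratio. A minimal sketch with placeholder coefficients (the operational coefficients are sensor-specific and differ from these):

```python
import numpy as np

# Band-ratio chlorophyll estimate in the style of the NASA OCx
# algorithms: a polynomial in the log of the maximum blue-to-green
# reflectance ratio. Coefficients are illustrative placeholders.
A = [0.32, -2.99, 2.72, -1.23, -0.57]

def chlorophyll(blue_rrs, green_rrs):
    """Chlorophyll-a (mg/m^3) from remote-sensing reflectances."""
    r = np.log10(max(blue_rrs) / green_rrs)   # max of several blue bands
    log_chl = sum(a * r**i for i, a in enumerate(A))
    return 10 ** log_chl

# A high blue:green ratio indicates clear, low-chlorophyll water:
print(chlorophyll([0.009, 0.008, 0.007], 0.003))   # -> ~0.2 mg/m^3
```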

The US agency partnership is the first step in a five-year project to create a reliable and standard method for identifying blooms of one specific group of phytoplankton, cyanobacteria, in US freshwater lakes and reservoirs. To detect blooms it will be necessary to study local environments to understand the factors that influence the initiation and evolution of a bloom.

It won’t be easy to create this methodology, as inland waters, unlike open oceans, carry a variety of other organic and inorganic materials, suspended in the water through land-surface run-off, which also have a reflectance signal. Hence, it will be necessary to ensure that other types of suspended particulate matter are excluded from the prediction methodology.

It’s an exciting development in our specialist area of ocean colour. We wish them luck and we’ll be looking forward to their research findings in the coming years.

Lidar: From space to your garage and pocket

Lidar data overlaid on an aerial photo for Pinellas Point, Tampa Bay, USA. Data courtesy of the NASA Experimental Airborne Advanced Research Lidar (EAARL), http://gulfsci.usgs.gov/tampabay/data/1_lidar/index.html


Lidar isn’t a word most people use regularly, but recent developments in the field might see a future where it becomes part of everyday life.

Lidar, an acronym for LIght Detection And Ranging, was first developed in the 1960s and is primarily a technique for measuring distance; other applications include atmospheric Lidar, which measures clouds, particles and gases such as ozone. The system comprises a laser, a scanner and a GPS receiver, and it works by emitting a laser pulse towards a target and measuring the time it takes for the pulse to return.

There are two main types of Lidar used within remote sensing for measuring distance: topographic and bathymetric. Topographic Lidar uses a near-infrared laser to map land, while bathymetric Lidar uses water-penetrating green light to measure the seafloor. The image at the top of the blog is bathymetric Lidar data overlaying an aerial photograph of Pinellas Point, Tampa Bay in the USA, showing depths below sea level in metres. Airborne terrestrial Lidar applications have also expanded to include measuring forest structure and mapping tree canopies, whilst there are ground-based terrestrial laser scanners for mapping structures such as buildings.

As a user, getting freely accessible airborne Lidar data isn’t easy, although a few organisations do offer datasets.

Spaceborne terrestrial Lidar has been limited, as it has to overcome a number of challenges:

  • It’s an active remote sensing technique, which means it requires a lot more power to run than passive systems, and for satellites this means more cost.
  • It’s an optical system that, like all optical systems, is affected by cloud cover and poor visibility, although interestingly it works more effectively at night, as the processing doesn’t need to account for the sun’s reflection.
  • Lidar performance decreases with the inverse square of the distance between the target and the system.
  • Lidar collects individual points, rather than an image, and images are created by combining lots of individual points. Whilst multiple overflights can be made quickly with a plane, a satellite orbiting the Earth is effectively collecting lines of points over a number of days, which takes time (see the gridding sketch after this list).
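
To make the last point concrete, here is a minimal sketch of how individual Lidar returns are binned into a raster grid (synthetic points and NumPy only; production pipelines use dedicated point-cloud tools):

```python
import numpy as np

# Bin individual Lidar returns (x, y, z points) into a raster grid,
# i.e. how 'lots of individual points' become an image.
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 10_000)                  # metres east (synthetic)
y = rng.uniform(0, 100, 10_000)                  # metres north (synthetic)
z = 50 + 0.1 * x + rng.normal(0, 0.2, x.size)    # heights with a gentle slope

cell = 10.0                                      # 10 m grid cells
cols = (x // cell).astype(int)
rows = (y // cell).astype(int)

grid = np.full((10, 10), np.nan)
for r in range(10):
    for c in range(10):
        hits = z[(rows == r) & (cols == c)]
        if hits.size:
            grid[r, c] = hits.mean()             # mean height per cell

print(np.round(grid, 1))
```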

The only satellite to have studied the Earth’s surface using Lidar is NASA’s Ice, Cloud and land Elevation Satellite with its Geoscience Laser Altimeter System (ICESat-GLAS); launched in 2003, it was decommissioned in 2010. It measured ice sheet elevations and changes, together with cloud and aerosol height profiles, land elevation and vegetation cover, and sea ice thickness, and its data products are available online. Its successor, ICESat-2, is scheduled for launch in 2017. The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, part of the A-Train satellite constellation, is a joint NASA and CNES mission launched in 2006. Originally designed as an atmosphere-focused Lidar, it has since developed marine applications, which led to the SABOR campaign we discussed in a previous blog.

Beyond remote sensing, Lidar may become part of every household in the future, if recent proofs of concept come to fruition. The Google self-driving car uses a Lidar as part of its navigation system to generate a 3D map of the surrounding environment. In addition, research recently published in Optics Express by Dr. Ali Hajimiri of the California Institute of Technology describes a tiny Lidar device with the potential to turn mobile phones into 3D scanning devices. Using a nanophotonic coherent imager, the proof-of-concept device has put together a 3D image of the front of a U.S. penny from half a metre away, with 15-μm depth resolution and 50-μm lateral resolution.

Lidar has many remote sensing and surveying applications; however, in the future we could all have lasers in our garages and pockets.