Chikungunya is a mosquito-borne viral disease first described during an outbreak in southern Tanzania in 1952. The causative agent is an RNA virus belonging to the alphavirus genus of the family Togaviridae. The name “chikungunya” derives from a word in the Kimakonde language meaning “to become contorted”, and describes the stooped appearance of sufferers with joint pain (arthralgia). Recently the disease has become widespread in Dhaka city, as in other areas of Bangladesh.
The Institute of Epidemiology, Disease Control and Research (IEDCR) gave a list of the affected locations while presenting a research paper at the Secretariat on Thursday. The areas are Dhanmondi 32, Sector 4 and Sector 9 of Uttara, Maddhya Badda, Gulshan 1, Lalmatia, Pallabi, Moghbazar, Malibagh Chowdhury Para, Rampura, Tejgaon, Banani, Noyatola, Kuril, Pirerbag, Rayerbazar, Shyamoli, Monipuripara, Mohammadpur, Mohakhali, Mirpur-1 and Korail slum.
- Most people infected with chikungunya virus will develop some symptoms.
- Symptoms usually begin 3–7 days after being bitten by an infected mosquito.
- The most common symptoms are fever and joint pain.
- Other symptoms may include headache, muscle pain, joint swelling, or rash.
- Chikungunya disease does not often result in death, but the symptoms can be severe and disabling.
- Most patients feel better within a week. In some people, the joint pain may persist for months.
- People at risk for more severe disease include newborns infected around the time of birth, older adults (≥65 years), and people with medical conditions such as high blood pressure, diabetes, or heart disease.
- Once a person has been infected, he or she is likely to be protected from future infections.
- The symptoms of chikungunya are similar to those of dengue and Zika, diseases spread by the same mosquitoes that transmit chikungunya.
- See your healthcare provider if you develop the symptoms described above and have visited an area where chikungunya is found.
- If you have recently traveled, tell your healthcare provider when and where you traveled.
- Your healthcare provider may order blood tests to look for chikungunya or other similar viruses like dengue and Zika.
- There is no vaccine to prevent or medicine to treat chikungunya virus.
- Treat the symptoms:
- Get plenty of rest.
- Drink fluids to prevent dehydration.
- Take medicine such as acetaminophen (Tylenol®) or paracetamol to reduce fever and pain.
- Do not take aspirin or other non-steroidal anti-inflammatory drugs (NSAIDs) until dengue can be ruled out, to reduce the risk of bleeding.
- If you are taking medicine for another medical condition, talk to your healthcare provider before taking additional medication.
- If you have chikungunya, prevent mosquito bites for the first week of your illness.
- During the first week of infection, chikungunya virus can be found in the blood and passed from an infected person to a mosquito through mosquito bites.
- An infected mosquito can then spread the virus to other people.
Source: WHO; CDC
According to satellite imagery, Mora’s maximum surface sustained winds were about 83 km/h (52 mph) with gusts to 102 km/h (63 mph). The estimated central pressure was about 992 hPa.
The system is likely to intensify further into a Severe Cyclonic Storm during the next 12 hours, the center said. It is very likely to move north-northeastwards and cross Bangladesh coast near Chittagong on May 30 (morning, local time).
Bangladesh Meteorological Department’s Special Weather Bulletin: Sl. No. 14 (fourteen), [date: 30.05.2017]
The severe cyclonic storm ‘MORA’ (ECP 990 hPa) over the North Bay and adjoining East Central Bay moved slightly northwards, lies over the same area (near lat 19.5°N and long 91.3°E) and was centred at midnight last night (29 May 2017) about 305 km south of Chittagong port, 230 km south of Cox’s Bazar port, 380 km south-southeast of Mongla port and 300 km south-southeast of Payra port. It is likely to intensify further, move in a northerly direction and may cross the Chittagong–Cox’s Bazar coast by the morning of 30 May 2017.
Under the peripheral influence of the severe cyclonic storm ‘MORA’, gusty/squally wind with rain/thunder showers is likely to continue over the North Bay and the coastal districts and maritime ports of Bangladesh.
Maximum sustained wind speed within 64 km of the cyclone centre is about 89 kph, rising to 117 kph in gusts/squalls. The sea will remain high near the system.
Maritime ports of Chittagong and Cox’s Bazar have been advised to keep hoisted great danger signal number ten (R) ten.
Coastal districts of Chittagong, Cox’s Bazar, Noakhali, Laxmipur, Feni, Chandpur and their offshore islands and chars will come under danger signal number ten (R) ten.
Maritime ports of Mongla and Payra have been advised to keep hoisted great danger signal number eight (R) eight.
Coastal districts of Bhola, Borguna, Patuakhali, Barisal, Pirozpur, Jhalokathi, Bagherhat, Khulna, Satkhira and their offshore islands and chars will come under danger signal number eight (R) eight.
Under the influence of the severe cyclonic storm ‘MORA’, the low-lying areas of the coastal districts of Cox’s Bazar, Chittagong, Noakhali, Laxmipur, Feni, Chandpur, Borguna, Bhola, Patuakhali, Barisal, Pirozpur, Jhalokathi, Bagherhat, Khulna, Satkhira and their offshore islands and chars are likely to be inundated by a storm surge of 4–5 feet above the normal astronomical tide.
The coastal districts of Cox’s Bazar, Chittagong, Noakhali, Laxmipur, Feni, Chandpur, Borguna, Bhola, Patuakhali, Barisal, Pirozpur, Jhalokathi, Bagherhat, Khulna, Satkhira and their offshore islands and chars are likely to experience wind speeds of up to 89–117 kph in gusts/squalls, with heavy to very heavy falls, during the passage of the severe cyclonic storm.
All fishing boats and trawlers over north bay and deep sea have been advised to remain in shelter till further notice.
Forecast Path : The red-shaded area denotes the potential path of the center of the tropical cyclone. Note that impacts (particularly heavy rain, high surf, coastal flooding) with any tropical cyclone may spread beyond its forecast path.
Torrential rainfall is expected along, north and to the east of the track over eastern Bangladesh, northeast India and western Myanmar, extending northward to the foothills of the Himalayas. This includes the Bangladesh capital of Dhaka, home to over 10 million, one of the world’s most densely populated cities.
This heavy rainfall extending well inland could trigger life-threatening flooding and, in mountainous areas, mudslides.
Rainfall Potential Through Wednesday
Of the 12 tropical cyclones on record that have claimed at least 100,000 lives, eight of those formed in the Bay of Bengal, according to Weather Underground.
One of these, the infamous Great Bhola Cyclone, killed at least 300,000 in November 1970, the world’s deadliest tropical cyclone of record.
In more recent times, Cyclone Nargis in 2008 devastated the Irrawaddy Delta region of Myanmar, claiming at least 130,000 lives.
Less intense storms have also been very deadly in the region.
In 2015, a tropical storm-strength cyclone, Cyclone Komen, hovered near the coast of Bangladesh and brought flooding rain to six countries that killed nearly 500 people. Cyclone Komen made weeks of heavy rainfall even worse as landslides occurred in Myanmar, and more than a million people were evacuated or displaced from Myanmar alone.
Source: weather.com; bmd.gov.bd; watchers news
Since the beginning of the human species, we have been at the whim of Mother Nature. Her awesome power can destroy vast areas and cause chaos for the inhabitants. The use of satellite data to monitor the Earth’s surface is becoming more and more essential. Of particular importance are disaster- and hurricane-monitoring systems that can help people identify damage in remote areas, measure the consequences of events, and estimate the overall damage to a given area. From a computing perspective, such an important task needs tools that can assist in a variety of situations.
To analyze and estimate the effects of a disaster, we use high-resolution satellite imagery of an area of interest. This can be obtained from Google Earth. We can also get free OSM vector data that provides a detailed ground-truth mask of houses. This is the latest vector zip from New York (Figure 1).
Next, we rasterize (convert from vector to raster) the image using a tool from GDAL called gdal_rasterize. As a result, we have acquired a training and testing dataset from Long Island (Figure 2).
We apply the deep learning framework Caffe to train a Convolutional Neural Network (CNN) model.
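In Caffe, that training step is driven by a solver definition plus a single command. A minimal sketch is shown below; every file name and hyperparameter here is illustrative rather than taken from the original pipeline:

```
# solver.prototxt -- illustrative hyperparameters, not the original ones
net: "house_segnet_train_val.prototxt"   # the CNN architecture definition
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 10000
momentum: 0.9
weight_decay: 0.0005
max_iter: 50000
snapshot: 5000
snapshot_prefix: "snapshots/house_segnet"
solver_mode: GPU

# Training is then launched with:
#   caffe train --solver=solver.prototxt
```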
The derived neural net enables us to identify the predicted houses from the target area after the event (Figure 4). We can also use data from another similar area which hasn’t been damaged for CNN learning (if we can’t access the data for the desired territory).
We work with the predicted building footprints using vectorization (extracting a contour and then converting the lines to polygons) (Figure 5).
Next, we compute the intersection of the obtained prediction vector and the original OSM vector (Figure 6). This can be accomplished by creating a new filter that divides the area of the predicted buildings by the area of the original OSM polygons. We then filter the predicted houses by applying a threshold of 10%: if the predicted area in green (Figure 6) covers less than 10% of the original area in red, the real building has been destroyed.
Using the 10%-area threshold, we can remove the houses that have been destroyed and get a new map that displays the existing buildings (Figure 7). By computing the difference between the pre- and post-disaster masks, we obtain a map of the destroyed buildings (Figure 8).
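The threshold-and-difference logic of the last two steps can be sketched in plain Python (the helper names and the per-building area inputs are illustrative; a real pipeline would compute these areas from the vector geometries):

```python
def surviving_buildings(osm_areas, predicted_overlap, threshold=0.10):
    """Keep a building if the predicted (post-event) footprint still
    covers at least `threshold` of its original OSM footprint area.

    osm_areas:          {building_id: area of the original OSM polygon}
    predicted_overlap:  {building_id: area of intersection with the
                         predicted post-event footprint}
    """
    surviving = {}
    for bid, osm_area in osm_areas.items():
        overlap = predicted_overlap.get(bid, 0.0)
        if osm_area > 0 and overlap / osm_area >= threshold:
            surviving[bid] = osm_area
    return surviving

osm = {"a": 120.0, "b": 80.0, "c": 95.0}
overlap = {"a": 115.0, "b": 2.0}          # "c" was not detected at all
kept = surviving_buildings(osm, overlap)
destroyed = sorted(set(osm) - set(kept))  # difference of pre/post masks
print(kept)       # {'a': 120.0}
print(destroyed)  # ['b', 'c']
```

The difference of the pre- and post-disaster sets is exactly the "map of destroyed buildings" described above, here reduced to building IDs.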
We have to remember that the roofs of houses appear as flat structures in 2-D images. This is an important feature that can also be used to filter input images. A local Laplace filter is a great tool for classifying flat and rough surfaces (Figure 9). The first input has to be a 4-channel image whose fourth (alpha) channel marks the no-data pixels of the input image. The second input (img1) is the same scene as a 3-channel RGB image.
Applying this tool lets you get the map of the flat surface. Let’s look at the new mask of the buildings which have flat and rough textures (Figure 10) after combining this filter and extracting the vector map.
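Why does a Laplace filter separate flat roofs from rough textures? A flat surface has zero second derivative, so the Laplacian response is near zero there and large over textured ground. A dependency-free sketch of the idea (the 3×3 kernel is the standard discrete Laplacian; the toy grids are invented):

```python
def laplacian_response(img):
    """Mean absolute response of the 3x3 Laplacian kernel
    [[0,1,0],[1,-4,1],[0,1,0]] over the interior of a 2-D grid."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            total += abs(lap)
            count += 1
    return total / count

flat  = [[100] * 5 for _ in range(5)]                             # uniform roof
rough = [[(x * 37 + y * 61) % 50 for x in range(5)] for y in range(5)]
print(laplacian_response(flat))    # 0.0   -> flat surface
print(laplacian_response(rough))   # large -> rough texture
```

Thresholding this response per pixel neighbourhood yields the flat-versus-rough mask discussed above.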
The OpenCV computer vision library has a denoising filter that helps remove noise from the flat-building masks (Figures 11, 12).
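In OpenCV that step is a one-liner (e.g. cv2.medianBlur or cv2.fastNlMeansDenoising). As a dependency-free stand-in, a 3×3 median filter shows the idea: isolated speckles in a binary mask vanish because the median of a mostly-uniform neighbourhood ignores outliers:

```python
from statistics import median

def median_filter_3x3(mask):
    """Replace each interior pixel with the median of its 3x3
    neighbourhood; isolated speckles in a binary mask are removed."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [mask[j][i]
                     for j in range(y - 1, y + 2)
                     for i in range(x - 1, x + 2)]
            out[y][x] = median(neigh)
    return out

noisy = [[0, 0, 0, 0],
         [0, 1, 0, 0],   # a single noisy speckle
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
clean = median_filter_3x3(noisy)
print(clean[1][1])  # 0 -> the speckle is gone
```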
Next, we apply filters to extract the contours and convert the lines into the polygons. This enables us to get new building recognition results (Figure 13).
We compute the area of the intersection between the vector mask obtained from the filter and the ground-truth OSM mask, and use a 14% threshold to reduce false positives (Figure 14).
As a result, we can see a very impressive new mask that describes houses that have survived the hurricane (Figure 15) and a vector of the ruined buildings (Figure 16).
After we have found the ruined houses, we can also pinpoint their locations. For this task OpenStreetMap comes in handy. We have installed an OSM plugin in QGIS and added an OSM layer to the canvas (Figure 17). Then we added a layer with the destroyed houses, and we can see all their addresses. If we want a file with the full addresses of the destroyed buildings, we have to:
- In QGIS, use Vector / OpenStreetMap / Download data and select the images with the desired information.
- Then, in QGIS, use Vector / OpenStreetMap / Import topology from XML and generate a database from the area of interest.
- Finally, use QGIS / Vector / OpenStreetMap / Export topology to SpatiaLite and select all the required attributes. (Figure 18)
As a result, we can get a full list, with addresses, of the destroyed buildings (Figure 19).
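Once the topology has been exported to SpatiaLite, pulling that address list is a plain SQL query. A sketch using Python's built-in sqlite3 (the table layout and column names here are hypothetical; a real OSM import stores tags such as addr:street and addr:housenumber):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the SpatiaLite file
con.execute("""CREATE TABLE buildings (
                   id INTEGER PRIMARY KEY,
                   addr_street TEXT,
                   addr_housenumber TEXT,
                   destroyed INTEGER)""")
con.executemany(
    "INSERT INTO buildings VALUES (?, ?, ?, ?)",
    [(1, "Main St", "12", 1),
     (2, "Main St", "14", 0),
     (3, "Ocean Ave", "7", 1)])

# Full list of addresses of the destroyed buildings
rows = con.execute(
    "SELECT addr_housenumber || ' ' || addr_street "
    "FROM buildings WHERE destroyed = 1 ORDER BY id").fetchall()
addresses = [r[0] for r in rows]
print(addresses)  # ['12 Main St', '7 Ocean Ave']
```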
If we compare these two approaches to building recognition, we notice that the CNN-based method has 78% accuracy in detecting destroyed houses, whereas the Laplace filter reaches 96.3% accuracy in recognizing destroyed buildings. As for the recognition of existing buildings, the CNN approach has 93% accuracy, while the second method has 97.9% detection accuracy. So we can conclude that the flat-surface recognition approach is more accurate than the CNN-based method.
The demonstrated method can be put to use immediately, letting people compute the extent of damage in a disaster area, including the number of houses destroyed and their locations. This would significantly help in estimating the extent of the damage and provide more precise measurements than currently exist.
Source: Dariia Gordiiuk from Earth Observatory System (EOS)
HOT supports collaborative mapping for emergency response whenever a disaster strikes anywhere in the world. Over time, HOT has been spreading its wings in the humanitarian arena: community development, partnering with different agencies, and developing technical tools and platforms. Alongside its great collaborative mapping tool, the Tasking Manager, HOT has also developed the HOT Training Center, under which it has built a curriculum for its volunteer ‘Activators’; specifically, a protocol and training program to empower the people who coordinate our disaster mapping. Key roles were identified during the initial development and observation of the Nepal Activation. The HOTActivationProtocol.pdf and the bulk of the raw training material were drafted through an ‘Activation Sprint’ workshop of core HOT coordinators in Washington DC, 27–29 April 2015. The curriculum was then further developed by specialist Russell Deffner, under the supervision of Technical Project Manager Mhairi O’Hara, with community input and direct involvement of existing HOT Activators and the Activation Working Group.
We from Bangladesh
The first workshop for the HOT Activation Curriculum was held during the pre-conference events of the first Africa Open Data Conference, hosted by the World Bank and the United Republic of Tanzania in the capital city, Dar es Salaam. The second workshop was held in Jakarta, Indonesia, which I attended. Russell Deffner and Mhairi O’Hara conducted the three days of sessions with state-of-the-art arrangements. I (Ahasanul Hoque) and Sajjad Hossen attended the Humanitarian OpenStreetMap Team (HOT) Activation Workshop along with other participants from Nepal, the Philippines, and Indonesia. In total we were 11 participants, starting at 9 am on 18 September in the meeting room of the ibis Tamarin Hotel, Jakarta. After Mhairi’s ice-breaking and introduction session, Russell began with a presentation covering the inception of the programme, the activation protocol, the 3 phases, event-size determination, roles, essential courses, the attendance procedure, getting badges, and so on. After that, each of us chose one course to complete by answering multiple-choice questions after reviewing the given materials and related links. On the second day we chose another course in the same way; all these courses are connected with the roles for the simulation exercise as well as for real-life disaster activations. Finally, we each took a different role for the simulation exercise: Manning Samble acted as Activation Lead, myself as Imagery, Sajjad as Tasking Manager, Faysal as Public Relations, Vasanti as Community Care, Megha as Data, Pratik as Partner Liaison, Yantisa as Usability, and Harry as Reporting. Everyone performed their own role nicely, with demo mail threads that gave us the feeling of a real activation.
Overall, I would say the goal of HOT and the activation workshop was fulfilled: they succeeded in sharing knowledge of the activation roles with the mappers, building the participants’ capacity for future disaster events. From my point of view the curriculum was well suited to the participants and will grow the number of volunteers; that said, the materials should be delivered at least one week prior to the workshop. Another point worth mentioning: the simulation exercise made us realize that a person’s background and skills matter before taking on any role. I think the sequence of the course materials is right and self-explanatory, though some questions need rewording so that participants grasp their actual meaning. The biggest convenience is that the training courses are openly available for everyone to access online, so I will complete the remaining ones very soon, and I wish to spread this training among the mappers of OSM in Bangladesh.
I shouldn’t skip the fun part of the workshop: the trainers didn’t fill the days with courses only, but also the HOT dinner, a visit to the national monument (Monas), a tour on car-free day, and more. The variety of food was really wonderful as well as tasty. To me the Kedai Pelangi item (beef ribs BBQ) was the best, followed by the chicken sauté. Finally, the restaurant GARUDA: extraordinary and artistic; whoever visits Jakarta should taste the food at Garuda. Our old-Jakarta visit was also fun: the museum, the different street shows, shopping at the street fair, and the lemur show, and the overall crowd gave me a feeling of unity. I love the people, I love Jakarta.
7 Oct 2015
The World Bank recently released its extensive guide to planning an Open Cities mapping project, which was proudly co-authored by HOT.
The Open Cities Project began two years ago under the World Bank’s Global Facility for Disaster Reduction and Recovery (GFDRR), with initial locations in Nepal, Bangladesh, and Sri Lanka. The aim is to promote open data ecosystems that support disaster risk management in high risk locations.
Of note in the Open Cities guide (available here) is a detailed approach to planning, managing, and reviewing mapping projects in which implementers intend to map a finite area using a specific data model. Using OpenStreetMap as the platform for data collection, the guide explains the methodology of mapping primarily in organizational terms, with less focus on the detailed technical aspects of tools like JOSM and Field Papers. Extensive experience and lessons learned from the first Open Cities projects, as well as HOT activities, will help inform future efforts.
It’s great to see how many variations of OSM mapping projects have emerged in the past years under different auspices and with varying objectives and methodologies. I think Open Cities puts the right focus on one of the core philosophies of all this work, which is to foster buy-in and collaboration among a wide range of actors, as the power and value of open data gains more and more traction.
This book offers insights and practical knowledge for anyone interested in organizing a large mapping project. I’m pleased that HOT has been able to collaborate on this, especially in being able to combine our experience with the important work of GFDRR. And I look forward to the future development of more and more open mapping activities! See the Abstract below:
This guide offers a comprehensive understanding of the design and implementation of an Open Cities mapping project for both practitioners in the field and those interested in a higher-level understanding of the process. The guide’s content is based on experience in implementing the initial Open Cities projects in Bangladesh, Nepal, and Sri Lanka as well as on previous mapping project experience. Where relevant, it provides examples from those projects in the text and full case studies at the end of the guide. The Open Cities Project launched its efforts in three cities: Batticaloa, Sri Lanka; Dhaka, Bangladesh; and Kathmandu, Nepal. These cities were chosen for: 1) their high levels of disaster risk; 2) the presence of World Bank-lending activities related to urban planning and disaster management that would benefit from access to better data; and 3) the willingness of government counterparts to participate in and help guide the interventions. Chapter 2, “Project Design and Preparation,” covers how a project design process begins: by identifying partners, clarifying a project’s objectives and scope, assembling a team of managers and mappers, and assessing the necessary resources for mapping. Chapter 3, “Getting Started,” then describes the steps after the initial planning stage: how to locate an appropriate workspace, assess equipment costs, and prepare staff training. Chapter 4, “Implementation and Supervision,” takes a practical look at data collection techniques from both the organizational and technical perspectives. It also addresses common challenges and mechanisms for quality control and reporting. Finally, chapter 5 examines the lessons learned from previous Open Cities projects and considers future improvements to the overall project design.
Antarctic Ozone Hole
In the early 1980s, scientists began to realize that CFCs were creating a thin spot—a hole—in the ozone layer over Antarctica every spring. This series of satellite images shows the ozone hole on the day of its maximum depth each year from 1979 through 2010.
Water Level in Lake Powell
Combined with human demands, a multi-year drought in the Upper Colorado River Basin caused a dramatic drop in the Colorado River’s Lake Powell in the early part of the 2000s. The lake began to recover in the latter part of the decade, but as of 2012, it was still well below capacity.
The shoreline of Cape Cod provides a visual case study in the evolution and dynamic motion of barrier islands and spits.
Antarctic Sea Ice
Because of differences in geography and climate, Antarctic sea ice extent is larger than the Arctic’s in winter and smaller in summer. Since 1979, Antarctica’s sea ice has increased slightly, but year-to-year fluctuations are large.
Arctic Sea Ice
NASA satellites have monitored Arctic sea ice since 1978. Starting in 2002, they observed a sharp decline in sea ice extent.
The state of Rondônia in western Brazil is one of the most deforested parts of the Amazon. This series shows deforestation on the frontier in the northwestern part of the state between 2000 and 2012.
Mountaintop Mining, West Virginia
Based on data from the Landsat satellites, these natural-color images document the growth of the Hobet mine in Boone County, West Virginia, as it expands from ridge to ridge between 1984 and 2013.
Shrinking Aral Sea
A massive irrigation project has devastated the Aral Sea over the past 50 years. These images show the decline of the Southern Aral Sea in the past decade, as well as the first steps of recovery in the Northern Aral Sea.
Recovery at Mt. St. Helens
The devastation of the May 1980 eruption of Mt. St. Helens and the gradual recovery of the surrounding landscape are documented in this series of satellite images from 1979–2013.
Fire in Etosha National Park
Prescribed fires should prevent blazes from raging out of control in one of Namibia’s most prized wildlife preserves.
Green Seasons of Maine
Not many places on Earth have year-round greenery and four distinct seasons. The images in this series show the four seasons of Maine, the most forest-covered state in the U.S.A.
Columbia Glacier, Alaska
Since 1980, the volume of this glacier that spills into the Prince William Sound has shrunk by half. Climate change may have nudged the process along, but mechanical forces have played the largest role in the ice loss.
Drought Cycles in Australia
Droughts have taken a severe toll on croplands in Southeast Australia during the past decade.
Athabasca Oil Sands
The Athabasca Oil Sands are at once a source of oil, of economic growth, and of environmental concern. This series of images shows the growth of surface mines around the Athabasca River from 1984 to the present.
Burn Recovery in Yellowstone
In 1988, wildfires raced through Yellowstone National Park, consuming hundreds of thousands of acres. This series of Landsat images tracks the landscape’s slow recovery through 2011.
This collection features the strongest hurricane, cyclone, or typhoon (from any ocean) during each of the past 11 years, including storms both infamous and obscure.
Seasons of the Indus River
Fed by glaciers in the Himalaya and Karakoram mountains — and by monsoon rains — the Indus River experiences substantial fluctuations every year. Because the river irrigates 18 million hectares of farmland, the landscape changes along with the river.
Urbanization of Dubai
To expand the possibilities for beachfront tourist development, Dubai, part of the United Arab Emirates, undertook a massive engineering project to create hundreds of artificial islands along its Persian Gulf coastline.
The world is getting warmer, whatever the cause. According to an analysis by NASA scientists, the average global temperature has increased by about 0.8°Celsius (1.4° Fahrenheit) since 1880. Two-thirds of the warming has occurred since 1975.
Seasons of Lake Tahoe
Perhaps the most familiar change in our changing world is the annual swing of the seasons. This series of images shows the effects of the seasons on the Lake Tahoe region between 2009 and 2010.
Images of sunspots and UV brightness document the 11-year cycle of solar magnetic activity. The series spans 1999–2010, capturing the most recent solar maximum and minimum, as well as the emergence of solar cycle 24.
Larsen-B Ice Shelf
In early 2002, scientists monitoring daily satellite images of the Antarctic Peninsula watched in amazement as almost the entire Larsen B Ice Shelf splintered and collapsed in just over one month. They had never witnessed such a large area disintegrate so rapidly.
In the years following the Second Gulf War, Iraqi residents began reclaiming the country’s nearly decimated Mesopotamian marshes. This series of images documents the transformation of the fabled landscape between 2000 and 2010.
Yellow River Delta
Once free to wander up and down the coast of the North China Plain, the Yellow River Delta has been shaped by levees, canals, and jetties in recent decades.
El Niño, La Niña, and Rainfall
For many people, El Niño and La Niña mean floods or drought, but the events are actually a warming or cooling of the eastern Pacific Ocean that impacts rainfall. These sea surface temperature and rainfall anomaly images show the direct correlation between ocean temperatures and rainfall during El Niño and La Niña events.
It is natural to wonder, “Why is the forest red and the sky blue in satellite images?” Chances are, you have a camera near you as you read this—in the smartphone in your pocket or on the tablet or computer you’re using to view this blog. Some of you might have a 35 mm film or digital camera nearby. And at some point this week, you probably looked through photos posted by friends or even strangers on the Internet. In our photo-saturated world, it’s natural to think of the images on the Earth Observatory as snapshots from space. But most aren’t.
Though they may look similar, photographs and satellite images are fundamentally different. A photograph is made when light is focused and captured on a light-sensitive surface (such as film or a CCD). A satellite image is created by combining measurements of the intensity of certain wavelengths of light, both visible and invisible to human eyes.
Why does the difference matter? When we see a photo where the colors are brightened or altered, we think of it as artful (at best) or manipulated (at worst). We also have that bias when we look at satellite images that don’t represent the Earth’s surface as we see it. “That forest is red,” we think, “so the image can’t possibly be real.”
In reality, a red forest is just as real as a dark green one. Satellites collect information beyond what human eyes can see, so images made from other wavelengths of light look unnatural to us. We call these images “false-color,” and to understand what they mean, it’s necessary to understand exactly what a satellite image is.
Infrared light renders the familiar unfamiliar. This infrared photograph shows the forests of Yellowstone National Park from Mount Sheridan. (Photograph courtesy National Park Service.)
Satellite instruments gather an array of information about the Earth. Some of it is visual; some of it is chemical (such as gases in the atmosphere); some of it is physical (sensing topography). In fact, remote sensing scientists and engineers are endlessly creative about what they can measure from space, developing satellites with a wide variety of tools to tease information out of our planet. Some methods are active, bouncing light or radio waves off the Earth and measuring the energy returned; lidar and radar are good examples. The majority of instruments are passive; that is, they record light reflected or emitted by Earth’s surface.
These observations can be turned into data-based maps that measure everything from plant growth to cloudiness. But data can also become photo-like natural-color images or false-color images. This article describes the process used to transform satellite measurements into images.
Seeing the Light
So what does a satellite imager measure to produce an image? It measures light that we see and light that we don’t see. Light is a form of energy—also known as electromagnetic radiation—that travels in waves. All light travels at the same speed, but the waves are not all the same. The distance between the crest of each wave—the wavelength—is shorter for high-energy waves and longer for low-energy waves.
Visible light comes in wavelengths of 400 to 700 nanometers, with violet having the shortest wavelengths and red having the longest. Infrared light and radio waves have longer wavelengths and lower energy than visible light, while ultraviolet light, X-rays, and gamma rays have shorter wavelengths and higher energy.
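The inverse relation between wavelength and energy can be made concrete with the Planck relation E = hc/λ (the constants below are the standard physical values; this calculation is added for illustration):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon of the given wavelength, in joules."""
    return H * C / wavelength_m

violet = photon_energy(400e-9)  # short-wavelength end of visible light
red    = photon_energy(700e-9)  # long-wavelength end of visible light
print(f"violet: {violet:.2e} J, red: {red:.2e} J")
assert violet > red  # shorter wavelength -> higher energy
```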
Most of the electromagnetic radiation that matters for Earth-observing satellites comes from the Sun. When sunlight reaches Earth, the energy is absorbed, transmitted, or reflected. (Absorbed energy is later re-emitted as lower-energy radiation.) Every surface or object absorbs, emits, and reflects light uniquely depending on its chemical makeup. Chlorophyll in plants, for example, absorbs red and blue light, but reflects green and infrared; this is why leaves appear green. This unique absorption and reflection pattern is called a spectral signature.
Varied land surfaces have distinct spectral signatures. Fresh basalt lava and asphalt reflect different amounts of infrared light, even though they appear similar in visible light. (Photograph ©2012 Robert Simmon. Figure by Robert Simmon, using data from the USGS Digital Spectral Library.)
Like Earth’s surfaces, gases in the atmosphere also have unique spectral signatures, absorbing some wavelengths of electromagnetic radiation and emitting others. Gases also let a few wavelengths pass through unimpeded. Scientists call these “atmospheric windows” for specific wavelengths, and satellite sensors are often tuned to measure light through these windows.
Atmospheric windows are regions of the spectrum where most light penetrates through the atmosphere, allowing satellites to view the Earth’s surface. (Figure adapted from Casey et al., 2012.)
Some satellite instruments also directly measure the energy emitted by objects. Everything gives off energy, usually in the form of heat (thermal infrared radiation). The hotter an object is, the shorter the peak wavelength it emits. At about 400°C (750° F)—the temperature of an electric stove burner set to high—the emitted light will begin to be visible. The colder an object is, the longer the peak wavelength it emits.
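The quantitative form of that rule is Wien's displacement law, λ_peak = b/T. A quick sketch comparing the stove burner mentioned above with the Sun (the solar surface temperature of about 5778 K is added for comparison; note that the burner's *peak* emission stays deep in the infrared — the faint glow at ~400°C comes from the short-wavelength tail of its emission):

```python
B_WIEN = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temp_kelvin):
    """Peak emission wavelength of a black body, in nanometres."""
    return B_WIEN / temp_kelvin * 1e9

stove = peak_wavelength_nm(400 + 273.15)  # electric burner at ~400 degC
sun   = peak_wavelength_nm(5778)          # solar surface
print(f"stove burner: {stove:.0f} nm (thermal infrared)")
print(f"Sun:          {sun:.0f} nm (visible)")
```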
Turning Wavelength Data Into an Image
Satellite instruments carry many sensors that are each tuned to a narrow range, or “band,” of wavelengths (just red or green light, for instance). Viewing the output from just one band is a bit like looking at the world in shades of gray. The brightest spots are areas that reflect or emit a lot of that wavelength of light, and darker areas reflect or emit little (if any).
To make a satellite image, we choose three bands and represent each in tones of red, green, or blue. Because most visible colors can be created by combining red, green, and blue light, we then combine the red, green, and blue-scale images to get a full-color representation of the world.
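The band-combination step can be sketched in a few lines of Python (the toy 2×2 “bands” of 8-bit brightness values are invented for illustration):

```python
# Three single-band grayscale grids: measured brightness of
# red, green, and blue light for a tiny 2x2 scene.
red_band   = [[200,  30], [ 90, 120]]
green_band = [[180,  40], [200,  60]]
blue_band  = [[ 60,  35], [ 80, 220]]

def composite(r_band, g_band, b_band):
    """Assign one band to each display channel and zip them into
    (R, G, B) pixels -- a true-color composite if the bands really
    are red, green, and blue measurements."""
    return [[(r, g, b) for r, g, b in zip(rr, gg, bb)]
            for rr, gg, bb in zip(r_band, g_band, b_band)]

image = composite(red_band, green_band, blue_band)
print(image[0][0])  # (200, 180, 60)

# A false-color composite simply swaps in a non-visible band, e.g.
# near-infrared displayed as red: composite(nir_band, red_band, green_band)
```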
A natural or “true-color” image combines actual measurements of red, green, and blue light. The result looks like the world as humans see it. (For tips on understanding true-color images, read How to Interpret a Satellite Image on the Earth Observatory.)
A false-color image uses at least one non-visible wavelength, though that band is still represented in red, green, or blue. As a result, the colors in the final image may not be what you expect them to be. (For instance, grass isn’t always green.) Such false-color band combinations reveal unique aspects of the land or sky that might not be visible otherwise.
This series of Landsat images of southeastern Florida and the Northern Everglades illustrates why you might want to see the world in false color. (A related animation shows how the images were made.) The visible light image shows dark green forest, light green agriculture, brown wetlands, silver urban areas (the city of Miami), and turquoise offshore reefs and shallows. These colors are similar to what you would see from an airplane.
The second image shows the same scene in green, near infrared, and shortwave infrared light. In this false-color band combination, plant-covered land is bright green, water is black, and bare earth ranges from tan to pink. Urban areas are purple. Newly burned farmland is dark red, while older burns are lighter red. Much of the farmland in this area is used to grow sugar cane. Farmers burn the crop before harvest to remove leaves from the canes. Because burned land looks different in this kind of false-color image, it is possible to see how extensively farmers rely on fire in this region.
This false-color view also reveals how water flows through the Northern Everglades. Green islands punctuate the wetlands, which are black and blue. These are tree islands that are hard to distinguish in natural color. Their orientation aligns with the flow of the water, highlighting direction that is not obvious in the natural color image. It is also easier to see the extent of the wetlands against surrounding land, since water is dark in this view and plant-covered land is bright green.
The third image shows the scene in green, red, and near infrared light. Plants are dark red because they reflect infrared light strongly, and the infrared band is assigned to be red. Plants that are growing quickly reflect more infrared, so they are brighter red. That means that this type of false-color image can help us see how well plants are growing and how densely vegetated an area is. Water is black and blue, and urban areas—including Miami, Fort Lauderdale, and West Palm Beach—are silver.
Observing in Visible Light
Data visualizers and remote sensing scientists make true- or false-color images in order to show the features they are most interested in, and they select the wavelength bands most likely to highlight those features.
Blue light (450 to 490 nanometers) is among the few wavelengths that water reflects (the rest are absorbed). Hence, blue bands are useful for seeing water surface features and for spotting the sea- or lake floor in shallow waters. You can see that water reflects some blue light in the above image of Lake Issyk Kul, Kyrgyzstan. Water is lighter in the blue band than it is in either the red or green bands, though the lake is too deep for shallow features to be visible. Manmade creations like cities and roads also show up well in blue light. It is also the wavelength most scattered by particles and gas molecules in the atmosphere, which is why the sky is blue.
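The claim that blue light is scattered most can be made concrete: Rayleigh scattering by gas molecules scales as 1/wavelength⁴. A quick sketch (wavelengths in nanometers):

```python
def relative_scattering(wavelength_nm, reference_nm=650.0):
    """Rayleigh scattering strength relative to red light (~650 nm).
    Scattering by particles much smaller than the wavelength goes as
    1 / wavelength**4, so shorter wavelengths scatter far more."""
    return (reference_nm / wavelength_nm) ** 4

# Blue light (~450 nm) scatters roughly 4.4x more than red light,
# which is why skylight looks blue.
print(round(relative_scattering(450), 1))  # 4.4
```

This same scattering adds a bluish haze to satellite images, which is one reason blue bands are sometimes avoided when atmospheric clarity matters.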
Green light (490 to 580 nanometers) is useful for monitoring phytoplankton in the ocean and plants on land. The chlorophyll in these organisms absorbs red and blue light, but reflects green light. Sediment in water also reflects green light, so a muddy or sandy body of water will look brighter because it is reflecting both blue and green light.
Red light (620 to 780 nanometers) can help distinguish minerals and soils that contain a high concentration of iron or iron oxides, making it valuable for studying geology. In the above image, for example, the exposed ground around Lake Issyk Kul varies in tone from pale tan to orange based on the mineral content of the soil. Since chlorophyll absorbs red light, this band is commonly used to monitor the growth and health of trees, grasses, shrubs, and crops. Red light can also help distinguish between different types of plants on a broad scale.
Observing in Infrared
Near infrared (NIR) light includes wavelengths between 700 and 1,100 nanometers. Water absorbs NIR, so these wavelengths are useful for discerning land-water boundaries that are not obvious in visible light. The below image shows the near infrared view of the Piqiang Fault, China. Stream beds and the wetland in the upper left corner are darker than the surrounding arid landscape because of their water content. (See a natural color view of the scene here.) Plants, on the other hand, reflect near infrared light strongly, and healthy plants reflect more than stressed plants. Finally, near infrared light can penetrate haze, so including this band can help discern the details in a smoky or hazy scene.
Shortwave infrared (SWIR) light includes wavelengths between 1,100 and 3,000 nanometers. Water absorbs shortwave infrared light in three regions: 1,400, 1,900, and 2,400 nanometers. The more water there is, even in soil, the darker the image will appear at these wavelengths. This means SWIR measurements can help scientists estimate how much water is present in plants and soil. Shortwave-infrared bands are also useful for distinguishing between cloud types (water clouds versus ice clouds) and between clouds, snow, and ice, all of which appear white in visible light. Newly burned land reflects strongly in SWIR bands, making them valuable for mapping fire damage. Active fires, lava flows, and other extremely hot features “glow” in the shortwave-infrared part of the spectrum. In the image below, different types of sandstone and limestone make up the mountains around China’s Piqiang Fault. Each rock type reflects shortwave infrared light differently, making it possible to map out geology by comparing reflected SWIR light. Enhancing the subtle differences between the three bands of reflected shortwave infrared light used to make this image gives each mineral a distinctive, bright color.
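The SWIR behavior of burned land described above is what the Normalized Burn Ratio, a standard index for mapping fire damage, exploits. The article does not name the index, and the reflectance values below are made-up illustrations:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio: healthy vegetation scores high (strong NIR
    reflectance, weak SWIR); freshly burned land scores low or negative
    (weak NIR, strong SWIR)."""
    return (nir - swir) / (nir + swir)

healthy = nbr(nir=0.45, swir=0.15)   # vegetated surface
burned  = nbr(nir=0.10, swir=0.35)   # recently burned surface
print(round(healthy, 2), round(burned, 2))  # 0.5 -0.56
```

Comparing NBR before and after a fire gives a per-pixel estimate of burn severity.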
Midwave infrared (MIR) ranges from 3,000 to 5,000 nanometers and is most often used to study emitted thermal radiation in the dark of night. Midwave infrared energy is also useful in measuring sea surface temperature, clouds, and fires. The images below contrast a visible-light nighttime view of the Niger River Delta with the same view in midwave infrared; both images are from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite. The day-night band shows visible light—the lights of Port Harcourt and Benin City, bright gas flares, and moonlight reflected off clouds. The midwave infrared image shows emitted thermal radiation. The warmer ocean and river are pale, while the cold land and clouds are dark, and the hot gas flares are bright.
Infrared (IR) light—specifically between 6,000 and 7,000 nanometers—is critical for observing water vapor in the atmosphere. Though water vapor makes up just 1 to 4 percent of the atmosphere, it is an important greenhouse gas. It is also the basis for clouds and rainfall. Water vapor absorbs and re-emits energy in this range, so infrared satellite observations can be used to track it. Such observations are integral to weather observations and forecasts.
Mid-infrared (7 µm) and thermal-infrared (12 µm) images showing water vapor (left) and temperature (right). The images are inverted to better show clouds: cold areas are light and warm areas are dark. (NASA/NOAA images by Robert Simmon, using data from the GOES Project Science Team.)
Thermal or longwave infrared (TIR or LWIR) light includes wavelengths between 8,000 and 15,000 nanometers. Most of the energy in this part of the spectrum is emitted (not reflected) by the Earth as heat, so it can be observed both day and night. Thermal infrared radiation can be used to gauge water and land surface temperatures; this makes it particularly useful for geothermal mapping and detection of heat sources like active fires, gas flares, and power plants. Scientists also use TIR to monitor crops. Actively growing plants cool the air above them by releasing water through evapotranspiration, so TIR light helps scientists assess how much water the plants are using.
How to Interpret Common False Color Images
Though there are many possible combinations of wavelength bands, the Earth Observatory typically selects one of four combinations based on the event or feature we want to illustrate. For instance, floods are best viewed in shortwave infrared, near infrared, and green light because muddy water blends with brown land in a natural color image. Shortwave infrared light highlights the difference between clouds, ice, and snow, all of which are white in visible light.
Our four most common false-color band combinations are:
- Near infrared (red), red (green), green (blue). This is a traditional band combination useful for seeing changes in plant health.
- Shortwave infrared (red), near infrared (green), and green (blue), often used to show floods or newly burned land.
- Blue (red), two different shortwave infrared bands (green and blue). We use this to differentiate between snow, ice, and clouds.
- Thermal infrared, usually shown in tones of gray to illustrate temperature.
Near infrared, red, green
One of our most frequently published combinations uses near infrared light as red, red light as green, and green light as blue. In this case, plants reflect near infrared and green light, while absorbing red. Since they reflect more near infrared than green, plant-covered land appears deep red. The signal from plants is so strong that red dominates the false-color view of Algeria below. Denser plant growth is darker red. This band combination is valuable for gauging plant health.
Cities and exposed ground are gray or tan, and clear water is black. In the image below, the water is muddy, and the sediment reflects light. This makes the water look blue. Images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and from the early Landsats are often shown in this band combination because that’s what the instruments measured.
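The red-versus-near-infrared contrast that makes vegetation glow red in this combination is also the basis of the widely used Normalized Difference Vegetation Index (NDVI), which the article does not name. A sketch with made-up reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), ranging from -1 to 1.
    Dense, healthy vegetation approaches 1; water is near zero or negative."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red)

nir = np.array([0.50, 0.30, 0.02])   # dense plants, sparse plants, water
red = np.array([0.08, 0.15, 0.05])
print(np.round(ndvi(nir, red), 2))   # roughly 0.72, 0.33, -0.43
```

Because it is a ratio, NDVI partly cancels out differences in illumination, which is why it is preferred over raw NIR brightness for comparing vegetation across scenes.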
Shortwave infrared, near infrared, and green
The most common false-color band combination on the Earth Observatory uses the shortwave infrared (shown as red), the near infrared (green), and the green visible band (shown as blue).
Water absorbs all three wavelengths, so it is black in this band combination. In the below false-color image of Algeria, however, water is blue because it is full of sediment. Sediment reflects visible light, which is assigned to look blue in this band combination. This means that both sediment-laden water and saturated soil will appear blue. Because water and wet soil stand out, this band combination is valuable for monitoring floods. Ice clouds, snow, and ice are bright blue, since ice reflects visible light and absorbs infrared. This helps distinguish water from snow and ice; it also distinguishes clouds made up mostly of liquid water from those made of ice crystals.
Newly burned land reflects shortwave infrared light and appears red in this combination. Hot areas like lava flows or fires are also bright red or orange. Exposed, bare earth generally reflects shortwave infrared light and tends to have a red or pink tone. Urban areas are usually silver or purple, depending on the building material and how dense the area is.
Since plants reflect near infrared light very strongly, vegetated areas are bright green. The signal is so strong that green often dominates the scene. Even the sparse vegetation in Algeria’s desert landscape stands out as bright green spots in the above image.
Blue, shortwave infrared
Occasionally, the Earth Observatory will publish a band combination that assigns blue light to be red and two different shortwave infrared bands to green and blue. This band combination is especially valuable in distinguishing snow, ice, and clouds. Ice reflects more blue light than snow or ice clouds. Ice on the ground will be bright red in this false color, while snow is orange, and clouds range from white to dark peach.
The Earth Observatory also uses thermal infrared measurements to show land temperatures, fire areas, or volcanic flows, but most of the time, these are published as grayscale images. Occasionally, the thermal features of interest will be layered on top of a true-color or grayscale image, particularly in the case of a fire or volcano.
You can explore the way different band combinations highlight different features by using a browse tool called Worldview, which displays data from many different imagers, including Aqua and Terra MODIS. Click on “add layers” and then select one of the alternate band combinations (1-2-1, 3-6-7, or 7-2-1). The site also provides descriptions of common MODIS band combinations.
You can also explore false color imagery with Landsat. See a few examples with a description in the Landsat 7 Compositor, or watch this animation of the Florida Everglades in three different band combinations. You can also make your own Landsat images and experiment with band combinations by using software like Adobe Photoshop or ImageJ. Download data for free from the U.S. Geological Survey, then follow the instructions for Photoshop or ImageJ.
- Learn more
- NASA: Take a Tour of the Electromagnetic Spectrum.
- NASA’s Earth Observatory (2013, November 18) How to Interpret a Satellite Image: Five Tips and Strategies.
- NASA’s Earth Observing System: Our Changing Planet.
- Natural Resources Canada (2007, September 25) Fundamentals of Remote Sensing.
“Years,” created by a pair of producers who worked together on “60 Minutes,” is billed as a documentary series about climate change, but it’s constructed like a newsmagazine, albeit a nine-hour newsmagazine about a single subject. Each episode weaves together several reports, some done by journalists like Lesley Stahl or Chris Hayes, but most by a roster of celebrities that includes Mr. Ford, Don Cheadle, Matt Damon, Olivia Munn, Jessica Alba and Arnold Schwarzenegger.
You might assume that this meant sacrificing some measure of journalistic credibility in the quest for attention. But the truth is, Mr. Ford and Mr. Cheadle are just as good as any seasoned television correspondent at the newsmagazine drill: Parachute in, digest a lot of material gathered by producers and researchers, ask reasonably intelligent questions, make small talk, look concerned. On the basis of the first episode, they’re probably better.
The series, whose executive producers include Hollywood big shots like James Cameron, Jerry Weintraub and Mr. Schwarzenegger, is a well-made and, by the looks of it, expensive product, from the crisp graphics to the travel budgets to the jittery opening credits that recall Showtime’s “Homeland.” Conveying information, à la “An Inconvenient Truth,” often takes a back seat to engaging the casual viewer. A report on wildfires entails Mr. Schwarzenegger’s going into action on the fire line; a lot of screen time is given to Mr. Ford’s alternately soulful and acerbic reaction shots.
Of the early reports, Mr. Ford’s is the most successful, largely because it’s the easiest to grasp: huge, smoky fires = deforestation = more carbon dioxide. Mr. Cheadle’s is more complicated, linking climate change to job loss in Texas and taking a long detour into science versus religion.
The most ambitious segment is reported by Thomas L. Friedman, a columnist for The New York Times, who seeks to investigate whether a long drought in Syria was one of the primary causes of that country’s civil war. It’s probably too big and complex a question for 20 minutes of screen time, parts of which are spent on issues of security and border crossings. When it comes to livening up environmental reporting, you can’t beat Mr. Ford in a boat in Borneo.
Hollywood celebrities and respected journalists joined forces around a common cause, climate change, spanning the globe to explore its impacts and to tell intimate stories of human triumph and tragedy. Check out James Cameron’s eye-opening documentary series on climate change, Years of Living Dangerously:
The impacts of global warming are likely to be “severe, pervasive and irreversible”, a major report by the UN has warned. Scientists and officials meeting in Japan say the document is the most comprehensive assessment to date of the impacts of climate change on the world.
Members of the UN’s climate panel say it provides overwhelming evidence of the scale of these effects. Natural systems now bear the brunt, but a growing impact on humans is feared. Our health, homes, food and safety are all likely to be threatened by rising temperatures, the summary says. The report was agreed after almost a week of intense discussions here in Yokohama, which included concerns among some authors about the tone of the evolving document.
This is the second in a series of reports from the Intergovernmental Panel on Climate Change (IPCC) due out this year that outline the causes, effects and solutions to global warming. This latest Summary for Policymakers document highlights the fact that the amount of scientific evidence on the impacts of warming has almost doubled since the last report in 2007. Be it the melting of glaciers or the warming of permafrost, the summary shows that on all continents and across the oceans, changes in the climate have caused impacts on natural and human systems in recent decades.
IPCC chairman Rajendra Pachauri said the findings of the report were “profound”. In the words of the report, “increasing magnitudes of warming increase the likelihood of severe, pervasive and irreversible impacts”.
“Nobody on this planet is going to be untouched by the impacts of climate change,” IPCC chairman Rajendra Pachauri told journalists at a news conference in Yokohama. Dr Saleemul Huq, a convening lead author on one of the chapters, commented: “Before this we thought we knew this was happening, but now we have overwhelming evidence that it is happening and it is real.”
Michel Jarraud, secretary-general of the World Meteorological Organization, said that, previously, people could have damaged the Earth’s climate out of “ignorance”. “Now, ignorance is no longer a good excuse,” he said.
Mr Jarraud said the report was based on more than 12,000 peer-reviewed scientific studies. He said this document was “the most solid evidence you can get in any scientific discipline”.
The report details significant short-term impacts on natural systems in the next 20 to 30 years. It details five reasons for concern that would likely increase as a result of the warming the world is already committed to. These include threats to unique systems such as Arctic sea ice and coral reefs, where risks are said to increase to “very high” with a 2C rise in temperatures.
The summary document outlines impacts on the seas and on freshwater systems as well. The oceans will become more acidic, threatening coral and the many species that they harbour. On land, animals, plants and other species will begin to move towards higher ground or towards the poles as the mercury rises. Humans, though, are also increasingly affected as the century goes on.
Food security is highlighted as an area of significant concern. Crop yields for maize, rice and wheat are all hit in the period up to 2050, with around a tenth of projections showing losses over 25%. After 2050, the risk of more severe yield impacts increases, as boom-and-bust cycles affect many regions. All the while, the demand for food from a population estimated to be around nine billion will rise.
Many fish species, a critical food source for many, will also move because of warmer waters. In some parts of the tropics and in Antarctica, potential catches could decline by more than 50%. “This is a sobering assessment,” said Prof Neil Adger from the University of Exeter, another IPCC author. “Going into the future, the risks only increase, and these are about people, the impacts on crops, on the availability of water and particularly, the extreme events on people’s lives and livelihoods.”
People will be affected by flooding and heat-related mortality. The report warns of new risks, including the threat to those who work outside, such as farmers and construction workers. There are also concerns over migration linked to climate change, as well as conflict and national security.
Report co-author Maggie Opondo of the University of Nairobi said that in places such as Africa, climate change and extreme events mean “people are going to become more vulnerable to sinking deeper into poverty”.
While the poorer countries are likely to suffer more in the short term, the rich won’t escape. “The rich are going to have to think about climate change. We’re seeing that in the UK, with the floods we had a few months ago, and the storms we had in the US and the drought in California,” said Dr Huq.
“These are multibillion dollar events that the rich are going to have to pay for, and there’s a limit to what they can pay.” But it is not all bad news, as the co-chair of the working group that drew up the report points out.
“I think the really big breakthrough in this report is the new idea of thinking about managing climate change as a problem in managing risks,” said Dr Chris Field.
“Climate change is really important but we have a lot of the tools for dealing effectively with it – we just need to be smart about it.” This new summary places far greater emphasis on adapting to the impacts of climate change. The problem, as ever, is who foots the bill.
“It is not up to IPCC to define that,” said Dr Jose Marengo, a Brazilian government official who attended the talks. “It provides the scientific basis to say this is the bill, somebody has to pay, and with the scientific grounds it is relatively easier now to go to the climate negotiations in the UNFCCC (United Nations Framework Convention on Climate Change) and start making deals about who will pay for adaptation.”
We’ve had several opportunities to refine GeoGit workflows in real-world situations, but among the most fulfilling was assisting with the response to Typhoon Yolanda (also known as Typhoon Haiyan) in the Philippines. It was the strongest cyclone to make landfall in recorded history, resulting in an urgent need to share data about the damage to help with recovery and reconstruction.
To meet this need, the Global Facility for Disaster Reduction and Recovery (GFDRR) teamed up with the American Red Cross and the Humanitarian OpenStreetMap Team (HOT) and launched an open data platform to gather and share data about Yolanda. The ROGUE project, which helps develop GeoGit, was asked to help manage and distribute extracts of OpenStreetMap data. As described below, we created a powerful bidirectional workflow with OpenStreetMap that enabled us not only to derive and publish up-to-date data for response and recovery efforts but also to contribute back to OpenStreetMap.
Importing OpenStreetMap Data
Thanks largely to HOT’s efforts, a large number of damaged and destroyed buildings were mapped into OpenStreetMap using commercial satellite imagery distributed under the Next View license or the State Department’s Imagery to the Crowd program. GeoGit was used to extract data from OpenStreetMap and transform it into formats more useful to traditional GIS applications.
While GeoGit supports reading and writing OpenStreetMap data in a variety of ways, the Yolanda efforts started with the daily .pbf downloads from Geofabrik, which were then imported into a GeoGit repository using the geogit osm import command. This initial import brings the data into the standard node and way layers in a GeoGit repository, with all of the OpenStreetMap tags attached to each feature. During the initial few imports we were able to find and solve some performance bottlenecks, reducing the import time from over an hour to just a few minutes.
Mapping to a Schema
Once imported, the geogit osm map command was used to map the data into more traditional sets of layers, using the tags as attributes. A JSON mapping file specifies which tags are used to separate the features into layers and which attributes are assigned to each feature. The key mapping took nodes and ways tagged with typhoon:damage=yes and translated them into damage_line layers with associated attributes. Over the course of mapping the data, we were able to make improvements to the codebase and workflow in several areas.
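For illustration, a mapping file along these lines could pull the damaged features into their own layer. This is a hypothetical sketch only: the post does not reproduce the actual file, and the exact JSON schema GeoGit expects may differ, so the rule and field names here are invented for illustration.

```json
{
  "rules": [
    {
      "name": "damage_line",
      "filter": { "typhoon:damage": ["yes"] },
      "fields": {
        "damage": { "name": "damage", "type": "STRING" },
        "name":   { "name": "name",   "type": "STRING" }
      }
    }
  ]
}
```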
Sharing Up-to-Date Data
Once the repository had the data organized into the right schema, we used the geogit export pg command to load snapshots into a PostGIS database and serve them to the web. Since we wanted to provide the most current data, we used the geogit osm apply-diff command to update the repository with daily diffs from the OSM planet. This ensured that our repository always reflected recent edits and that the layers exported to the site stayed up to date.
Contributing Back to OpenStreetMap
In addition to staying in sync with the global OpenStreetMap planet, GeoGit made it possible to change layers in our repository and apply those changes back to OpenStreetMap, enabling a fully round-trip, bidirectional workflow. For example, we found many misspellings and inconsistent uses of tags in the data and were able to correct them. We fixed these issues against our PostGIS snapshot, applied the changes back to the repository, generated a changeset using the geogit osm create-changeset command, and finally uploaded the changeset using JOSM. In the process, we were once again able to improve these functions based on real-world usage.
These tools enable a powerful bidirectional workflow with OpenStreetMap. We demonstrated that data can be imported from OpenStreetMap into a local repository, mapped into a set of layers with a well-defined schema, and served via OGC services. Repositories can be kept in sync with OpenStreetMap over time and, if changes are made to the local repository, GeoGit enables us to produce changesets that can be contributed to the global OSM dataset. Using this same workflow, it becomes possible for users to effectively work with a local extract of OSM data for both making and applying local edits as well as incorporating upstream changes.
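Assembled from the commands described above, the round-trip pipeline can be outlined as follows. This is a sketch, not a complete recipe: the file names, arguments, and flags are illustrative, since the post does not give full invocations.

```
# 1. Import a daily planet extract into the standard node/way layers
geogit osm import philippines.pbf

# 2. Map tagged features into conventional layers
#    (e.g. typhoon:damage=yes -> damage_line)
geogit osm map mapping.json

# 3. Export a snapshot to PostGIS for serving over the web
geogit export pg

# 4. Apply daily OSM diffs to keep the repository current
geogit osm apply-diff daily-changes.osc

# 5. After local fixes, generate a changeset and upload it with JOSM
geogit osm create-changeset
```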
To learn more about all of the above, tune in on April 13th, when Jeff Johnson will present more on GeoGit-based OpenStreetMap import workflows at State of the Map US.