Several of the 772 new US Topo quadrangles for Mississippi now display parts of the Natchez Trace National Scenic Trail and other selected public trails. Other significant additions to the new quadrangles include redesigned map symbols, enhanced railroad information and new road source data. For Gulf Coast residents, recreationists and visitors who want to explore the featured Mississippi trails by bike, on foot, on horseback or by other means, the new trail features on the US Topo maps will be useful.
Historically, the 450-mile foot trail that became known as the Natchez Trace was the lifeline through the Old Southwest. The footpath ran through Choctaw and Chickasaw lands, connecting Natchez, Mississippi, to Nashville, Tennessee. Today the trail network consists of five separate trails totaling more than 60 miles.
"The inclusion of the Natchez Trace National Scenic Trail on the US Topo maps will be an excellent tool for publicizing the trail to visitors,” said Greg Smith, Natchez Trace National Scenic Trail Coordinator for the National Park Service. “The trail traverses three states and provides an opportunity for users to experience the unique cultural and natural aspects of the Old Natchez Trace."
The USGS partnered with the National Park Service to incorporate the trail data into the Mississippi US Topo maps. The Natchez Trace National Scenic Trail joins the Ice Age National Scenic Trail, the Pacific Northwest National Scenic Trail, the North Country National Scenic Trail, the Pacific Crest National Scenic Trail, and the Arizona National Scenic Trail in being featured on the new US Topo quads. The USGS plans to eventually include all National Scenic Trails in The National Map products.
Some of the other new trail data on the maps is provided to the USGS through a nationwide “crowdsourcing” project managed by the International Mountain Bicycling Association (IMBA). This unique crowdsourcing venture has increased the trail data available through The National Map mobile and web apps, and the revised US Topo maps.
During the past two years, IMBA, in partnership with the MTB Project, has been building a detailed national database of trails. This effort allows local IMBA chapters, IMBA members and the public to submit trail data and descriptions through the MTB Project website. MTB Project and IMBA then verify the quality of the trail data provided, ensure its accuracy and confirm that each trail is legal.
These new maps replace the first edition US Topo maps for the Magnolia State and are available for free download from The National Map, the USGS Map Locator & Downloader website, or several other USGS applications.
To compare change over time, scans of legacy USGS topo maps, some dating back to the late 1800s, can be downloaded from the USGS Historical Topographic Map Collection.
For more information on US Topo maps: http://nationalmap.gov/ustopo/
Updated 2015 version of the Tupelo, Mississippi US Topo quadrangle with orthoimage turned on (1:24,000 scale). (high resolution image, 1.4 MB)
Scan of the 1921 legacy topographic map quadrangle of the Tupelo, Mississippi area from the USGS Historical Topographic Map Collection. (high resolution image, 2 MB)
Updated 2015 version of the Tupelo, Mississippi US Topo quadrangle with orthoimage turned off (1:24,000 scale). (high resolution image, 1.2 MB)
The National Trails System was established by an Act of Congress in 1968. The Act grants the Secretary of the Interior and the Secretary of Agriculture authority over the National Trails System, and it defines four types of trails. Two of these types, National Historic Trails and National Scenic Trails, can be designated only by an Act of Congress. National Scenic Trails are extended trails located so as to provide for maximum outdoor recreation potential and for the conservation and enjoyment of the nationally significant scenic, historic, natural, and cultural qualities of the areas through which such trails pass.
There are 11 National Scenic Trails:
- Appalachian National Scenic Trail
- Pacific Crest National Scenic Trail
- Continental Divide National Scenic Trail
- North Country National Scenic Trail
- Ice Age National Scenic Trail
- Potomac Heritage National Scenic Trail
- Natchez Trace National Scenic Trail
- Florida National Scenic Trail
- Arizona National Scenic Trail
- New England National Scenic Trail
- Pacific Northwest National Scenic Trail
A team of four climbers has recently returned from the highest point in North America with new survey data that will be used to determine a more precise summit height for Mount McKinley. The new elevation is expected to be announced in late August.
The ability to establish a much more accurate height has grown with advances in surveying technology since 1953, when the last official survey of Mount McKinley was recorded. The new elevation will eventually replace the currently accepted elevation of 20,320 feet.
“Determining an updated elevation for the summit of Mount McKinley presents extraordinary challenges,” said Suzette Kimball, acting director of the USGS. “The USGS and its partners are using the best available modern GPS survey equipment and techniques to ensure the new elevation will be determined with a high level of accuracy and confidence.”
Unique circumstances and variables such as the depth of the snow pack and establishing the appropriate surface that coincides with mean sea level must be taken into account before the new Mount McKinley elevation can be determined.
In 2013, an elevation was calculated for Mount McKinley using a technology known as interferometric synthetic aperture radar (IfSAR). The 2013 elevation was slightly lower than the summit’s 20,320-foot height. IfSAR is an extremely effective tool for collecting map data in challenging areas such as Alaska, but it does not provide precise spot or point elevations. The new survey used GPS instruments placed directly on the summit to measure a specific point on the surface, giving a more precise spot elevation.
The USGS, along with NOAA’s National Geodetic Survey (NGS), and the University of Alaska Fairbanks (UAF), are the primary partners supporting the survey of McKinley’s summit. The survey party included three GPS experts and mountaineers from CompassData (a subcontractor for Dewberry), and a scientist/climber from UAF’s Geophysical Institute.
Now that the data collection expedition is completed, the NGS, UAF, USGS and CompassData are in the process of analyzing the data.
"CompassData was honored to help the USGS and NOAA on this nationally important project,” said Blaine Horner, a member of the climbing team. “Our experience surveying around the world put us in a unique position to perform this work."
The team began their ascent, with the needed scientific instruments in tow, on June 16. With diligent work and mostly favorable weather, the team safely returned to their starting point ahead of schedule.
"We had nearly perfect weather on the mountain,” said Tom Heinrichs, Director of the UAF Geographic Information Network of Alaska and part of the climbing team. “The logistics on the mountain all went well. The summit survey was successful and our preliminary look at the data indicates we will get a good solution for the summit elevation."
Mount McKinley is part of Denali National Park. The park hosts more than 530,000 visitors each year, about 1,200 of whom attempt to climb Mount McKinley. In a typical year, about half of those who begin a McKinley climb reach the summit. The six-million-acre park will celebrate its 100th anniversary in 2017. The mountain was first summited in 1913.
Agustin (Udi) Karriere (front) and Rhett Foster from CompassData establishing the 11,000-foot camp, preparing to move to the next camp and the summit ascent. (Photo: Tom Heinrichs, UAF) (Larger image)
Rhett Foster from CompassData on a ridge leading to the 17,000-foot base camp. (Photo: Tom Heinrichs, UAF) (Larger image)
Tom Heinrichs from the University of Alaska Fairbanks and Agustin (Udi) Karriere from CompassData traveling low on the mountain toward the next base camp, towing needed science and camp equipment. (Photo: Rhett Foster, CompassData) (Larger image)
On top of North America! Blaine Horner from CompassData poses with GPS equipment on the top of Mount McKinley. (Photo: Agustin Karriere, CompassData) (Larger image)
Aerial photograph of Kwajalein Atoll showing its low-lying islands and coral reefs. (High resolution image)
SANTA CRUZ, Calif. — Coral reefs, under pressure from climate change and direct human activity, may have a reduced ability to protect tropical islands against wave attack, erosion and salinization of drinking water resources, which help to sustain life on those islands. A new paper by researchers from the Dutch independent institute for applied research Deltares and the U.S. Geological Survey gives guidance to coastal managers to assess how climate change will affect a coral reef’s ability to mitigate coastal hazards.
About 30 million people live on low-lying coral islands and atolls and depend on the protection coral reefs provide. At present, some of these islands experience flooding from wave events a few times per decade. That rate of flooding is expected to increase with sea level rise and coral reef decay, because the remaining dead corals are generally smoother in structure and do less to dissipate wave energy. Loss of coral cover not only increases shoreline erosion but also affects the sparse drinking water resources on these islands, which may eventually make the islands uninhabitable. To prevent or mitigate these impacts, coastal managers need to know to what extent their reef system may lose its protective function so that they can take action. The new study gives guidance on a local reef’s sensitivity to change. The research has been accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union.
To gain insight into the effects of changing conditions on coral reefs, the study authors used XBeach, an open-source wave model. The model was first validated using field measurements obtained on Kwajalein Atoll in the Marshall Islands in the Pacific Ocean, and was then used to investigate how water levels, waves, and wave-driven runup would respond if certain reef properties change. Reef roughness, steepness, width and the total water level on the reef platform are all important factors for coastal managers to consider when planning mitigation measures.
The results suggest that coasts fronted by relatively narrow reefs with steep faces and deeper, smoother reef flats are expected to experience the highest wave runup and thus potential for island flooding. Wave runup increases for higher water levels (that are expected with sea level rise), higher waves, and lower bed roughness (as coral degrades and becomes smoother), which are all expected effects of climate change. Rising sea levels and climate change will have a significant negative impact on the ability of coral reefs to mitigate the effects of coastal hazards in the future.
The research paper, “The influence of coral reefs and climate change on wave-driven flooding of tropical coastlines,” is published as an open-access paper and available online.
Quataert, E., C. Storlazzi, A. van Rooijen, O. Cheriton, and A. van Dongeren (2015), The influence of coral reefs and climate change on wave-driven flooding of tropical coastlines, Geophysical Research Letters, 42, doi:10.1002/2015GL064861
CORAM, N.Y. -- A new study that looked in part at how damage estimates evolve following a storm puts the total amount of building damage caused by Hurricane Sandy for all evaluated counties in New York at $23 billion. Estimates of damage by county ranged from $380 million to $5.9 billion.
The U.S. Geological Survey study, done in cooperation with the Federal Emergency Management Agency, marks the first time the agency has done this type of analysis and cost estimation for a coastal storm.
"We looked at how estimates of building damage change depending on the amount of information available at the time of the estimate, looking at three time periods -- storm landfall, two weeks later, and then three months later," said Chris Schubert, a USGS hydrologist and lead author of the study. "What we found was that the biggest jump in estimate reliability comes between the initial estimate and the two-week mark, but that the additional information available three months after an event greatly helps refine the estimates even further."
The USGS researcher called the study a "proof of concept" that really showcased the value of gathering storm data before and after a storm.
"FEMA funded the sensor placement we did prior to the storm and our assessment of how high the water reached after the storm," Schubert said. "The results from this new study demonstrated how the additional resolution and accuracy of flood depictions resulting from these efforts greatly improved the damage estimates."
Damage estimates can be used by FEMA and other stakeholders to help prioritize relief and reconstruction efforts following a storm. The results can also assist with resiliency planning that helps communities prepare for future storms.
The researchers came up with the estimates by using census data and FEMA’s HAZUS modeling software program. The HAZUS program is used to estimate potential loss from disasters such as earthquakes, wind, hurricanes and floods. This program allows for an assessment of building loss on a block-by-block level.
Hurricane Sandy marked the first time in recent memory, and on record, that coastal water levels reached the heights they attained in many places in the state of New York. Flooding from Hurricane Sandy was significantly more extensive than that from Tropical Storm Irene in 2011, with most water levels rising at least 2.5 feet higher than in the 2011 storm.
With the latest USGS analysis, a comprehensive picture of the magnitude of Sandy’s impact is now available. Without the sensor placement before the storm, and assessment of high-water marks after, this level of understanding wouldn’t be possible.
"This is the first time USGS has done this type of analysis and cost estimation for a coastal storm," said Schubert. "The effort incorporates what we learned from previous storms going back to Katrina, and the storm-tide information we provided to FEMA in the immediate aftermath of Sandy is one of the building blocks for this research. The additional fidelity of the damage estimate underscores the tremendous value of the dataset for this storm."
Interpretation of storm-tide data from a variety of tools such as tide gauges, stream gauges, and temporary sensors combined with high-water marks showed the extreme nature of storm-tide flooding and, at some sites, the severity and arrival time of the storm surge. Storm surge is the height of water above the normal astronomical tide level due to a storm. Storm tide is the storm surge in addition to the regular tide.
"Timing matters, though every storm is different," said Schubert. "Throughout southeastern New York, we saw that timing of the surge arrival determined how high the storm tide reached. The worst flooding impacts occurred along the Atlantic Ocean-facing parts of New York City and western Long Island, where the peak storm surge arrived at high tide. So the resulting storm tide was five to six feet higher than it would have been had the peak surge arrived at low tide."
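The storm-surge and storm-tide definitions above are simple additive arithmetic, and the roughly five-foot difference Schubert describes follows directly from it. A minimal sketch, using illustrative water levels that are assumptions rather than measurements from the study:

```python
def storm_tide(astronomical_tide_ft, storm_surge_ft):
    """Storm tide is the storm surge added to the regular (astronomical) tide."""
    return astronomical_tide_ft + storm_surge_ft

# Hypothetical values: the same peak surge produces a very different
# storm tide depending on whether it arrives at high or low tide.
surge = 9.0                       # peak storm surge, in feet
high_tide, low_tide = 2.5, -2.5   # astronomical tide relative to mean sea level

at_high = storm_tide(high_tide, surge)
at_low = storm_tide(low_tide, surge)
print(f"high-tide arrival: {at_high} ft, low-tide arrival: {at_low} ft")
print(f"difference: {at_high - at_low} ft")
```

With these made-up numbers, the difference is 5.0 feet, in line with the five-to-six-foot range quoted above.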
The new research, “Analysis of Storm-Tide Impacts from Hurricane Sandy in New York” (SIR 2015-5036), by C.E. Schubert, R. Busciolano, P.P. Hearn Jr., A.N. Rahav, R. Behrens, J. Finkelstein, J. Monti Jr., and A.E. Simonson, is available online. It examined damage estimates for those counties with depictions of flood extent available from FEMA and the National Hurricane Center.
The USGS is also conducting a study in New Jersey that examines similar topics, including the estimated flood frequency of documented peak storm-tide elevations, comparisons of Sandy to historic coastal storms, the timing of storm surge, and changes in HAZUS damage estimates with the use of USGS sensor and high-water-mark data. That study is expected to be completed and released later this year.
ISLAND OF HAWAI‘I, Hawaii — Hawai‘i: the name alone elicits images of rhythmic traditional dancing, breathtaking azure sea coasts and vibrant birds flitting through lush jungle canopy. Unfortunately, the future of many native Hawaiian birds looks grim, as diseases carried by mosquitoes are expected to expand into higher-elevation safe zones.
A new study published in Global Change Biology, by researchers at the U.S. Geological Survey and the University of Wisconsin-Madison, assesses how global climate change will affect future malaria risk to native Hawaiian bird populations in the coming century.
Mosquito-borne diseases such as avian pox and avian malaria have been devastating native Hawaiian forest birds. A single mosquito bite can transfer malaria parasites to a susceptible bird, and the death rate may exceed 90 percent for some species. As a result, many already threatened or endangered native birds now survive only in disease-free refuges in high-elevation forests, where mosquito populations and malaria development are limited by colder temperatures. Unlike continental bird species, island birds cannot move northward in response to climate change or increased disease stressors, but must adapt or move to less hospitable habitats to survive.
“We knew that temperature had significant effects on mosquitoes and malaria, but we were surprised that rainfall also played an important role,” said USGS Wisconsin Cooperative Wildlife Research Unit scientist Michael Samuel. “Additional rainfall will favor mosquitoes as much as the temperature change.”
With warming temperatures, mosquitoes will move farther upslope and increase in number. The authors expect high-elevation areas to remain mosquito-free, but only until mid-century when mosquito-friendly temperatures will begin to appear at higher elevations. Future increases in rainfall will likely benefit the mosquitoes as well.
Scientists know that malaria has historically caused bird extinctions, but changing climates could affect the bird-mosquito-disease system in unknown ways. “We wanted to figure out how climate change impacts birds in the future,” said Wei Liao, a postdoctoral researcher at the University of Wisconsin-Madison and lead author of the article.
As more mosquitoes move up the mountainside, disease-free refuges will no longer provide a safe haven for the most vulnerable species. The rate of disease infection is likely to speed up as the numbers of mosquitoes increase and more diseased birds become hosts to the parasites, continuing the cycle of infection to healthy birds.
Researchers conclude that future global climate change will cause substantial decreases in the abundance and diversity of remaining Hawaiian bird communities. Without significant intervention many native Hawaiian species, like the scarlet ‘I‘iwi with its iconic curved bill, will suffer major population declines or extinction due to increasing risk from avian malaria during the 21st century.
There is hope for the birds. Because these effects are unlikely to appear before mid-century, natural resource managers have time to implement conservation strategies to protect these unique species from further decimation. Land managers could work to prevent declines in forest bird numbers by restoring and improving habitat, reducing mosquitoes on a large scale and controlling predators of forest birds.
“Hawaiian forest birds are some of the most threatened forest birds in the world,” said Samuel. “They are totally unique to Hawai‘i and found nowhere else. They are also important to the Hawaiian culture. And at this point, we still don’t fully understand what role they play as pollinators and in forest dynamics.”
The article, “Will a Warmer and Wetter Future Cause Extinction of Native Hawaiian Forest Birds?” can be found in the online edition of Global Change Biology.
The work was supported by the Department of the Interior Pacific Islands Climate Science Center, which is managed by the USGS National Climate Change and Wildlife Science Center. The center is one of eight that provide scientific information to help natural resource managers respond effectively to climate change.
ANCHORAGE, Alaska — The U.S. Geological Survey today released the North Pacific Pelagic Seabird Database — a massive online resource compiling the results of 40 years of surveys by biologists from the United States, Canada, Japan and Russia. The database documents the abundance and distribution of 160 seabird and 41 marine mammal species over a 10 million-square-mile region of the North Pacific.
“The database offers a powerful tool for analysis of climate change effects on marine ecosystems of the Arctic and North Pacific, and for monitoring the impact of fisheries, vessel traffic and oil development on marine bird communities over a vast region,” said Dr. John Piatt, head of the Seabird and Forage Fish Ecology Research Program at the USGS Alaska Science Center. “It also creates an unprecedented opportunity to study the biogeography and marine ecology of dozens of species of seabirds and marine mammals throughout their range in continental shelf waters of the United States.”
Hundreds of scientists and observers conducted surveys, gathering data on more than 350,000 transects ranging from the Channel Islands of southern California westward to the coast of South Korea, and from the Hawaiian Islands northward to the North Pole. The majority of data collection occurred over the U.S. continental shelves stretching from California to Arctic Alaska, where concerns over the possible impact of human activities at sea have long fueled wildlife research and monitoring efforts.
The surveys were conducted over four decades as part of focused studies, for various purposes and in specific regions within the North Pacific. Hundreds of observers from dozens of international, federal and state wildlife agencies, universities and consulting companies contributed data. Because similar observational methods were used, the data could be compiled into a single database, shedding light on broader patterns of seabird distribution and abundance.
USGS scientists started compiling the data into the NPPSD in 2001 and published the first version in 2005. This is the first time the database has been made available online. The current version includes surveys conducted in the last decade and from additional regions. The compilation of data from surveys spanning 40 years makes the NPPSD one of the largest marine wildlife censuses ever conducted in terms of the number of animals observed and spatial extent of the survey area.
“Contributors to the NPPSD can now examine large-scale phenomena that were previously impossible for individual studies to assess because they were conducted on smaller temporal and spatial scales,” said Dr. Gary Drew, database manager for the Seabird and Forage Fish Ecology Research Program at the USGS Alaska Science Center.
The value of the NPPSD for understanding the ecology of the North Pacific and the impacts of human activities in this region has just begun to be realized. Recent analyses using NPPSD data included a risk analysis of shipping traffic on seabirds in the heavily traveled Aleutian Islands conducted by the U.S. Fish and Wildlife Service, and a study commissioned by the National Audubon Society to identify “Important Bird Areas” from California to Alaska. Future analysis of the database by USGS scientists aims to yield many insights into the status of seabird and marine mammal populations, while the live online database meets the Obama Administration’s directive of "Expanding Public Access to the Results of Federally Funded Research."
The NPPSD and Users Guide are available from the USGS Alaska Science Center website.
The U.S. Geological Survey, in collaboration with the U.S. Fish and Wildlife Service, has released a study that will enable ecologists, managers, policy makers and industry to predict bird fatalities at a wind facility before it is constructed.
The study examined golden eagles as a case study because they are susceptible to collisions with wind turbines in part because of their soaring and hunting behavior.
Bird fatalities due to collisions with rotating turbine blades are a leading concern for wildlife and wind facility managers. This new model builds upon previous approaches by directly acknowledging uncertainty inherent in predicting these fatalities. Furthermore, the computer code provided makes it possible for other researchers and managers to readily apply the model to their own data.
The model looks at only three parameters: hazardous footprint, bird exposure to turbines and collision probability. “This simplicity is part of what makes the model accessible to others,” said Leslie New, assistant professor of statistics at Washington State University, who led the research project as a USGS postdoctoral fellow. “It also allows wind facility developers to consider ways to reduce bird fatalities without having to collect a complicated set of data.”
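As a rough illustration of how a three-parameter model of this kind can combine its inputs, consider the sketch below. The parameter names, values and the simple multiplicative rule are assumptions for demonstration only, not the published model:

```python
def expected_fatalities(exposure_bird_min, hazardous_fraction, collision_prob):
    """Hypothetical sketch: expected fatalities as the product of observed
    bird flight time, the fraction of that time spent inside the hazardous
    footprint, and the probability that an exposed minute ends in a
    collision. Illustrative only, not the published model."""
    return exposure_bird_min * hazardous_fraction * collision_prob

# Made-up inputs: 10,000 bird-minutes of flight observed in pre-construction
# surveys, 5% of it within the hazardous footprint, and a 1% collision chance.
fatalities = expected_fatalities(10_000, 0.05, 0.01)
print(f"{fatalities:.1f}")  # prints 5.0
```

In the actual study, each quantity is treated as uncertain rather than as a point value, which is what lets the prediction carry explicit uncertainty and be refined once post-construction fatality counts become available.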
High rates of bird fatalities do not occur at every wind facility. The geographic location, local topographic features, the bird species and its life history, as well as other factors all play a role in the number of fatalities.
Taking advantage of publicly available information, research scientists incorporated a wealth of biological knowledge into their model to improve fatality predictions.
“Uncertainty in this model can be reduced once data on the actual number of fatalities are available at an operational wind facility,” said New.
To establish the utility of their approach, the scientists applied their model to golden eagles at a Wyoming wind facility. The species’ long life span, combined with delayed reproduction and small brood size, means this additional source of mortality could have population-level effects.
Golden eagles are protected under the Bald and Golden Eagle Protection Act and the Migratory Bird Treaty Act. The combination of law, conservation concerns, and renewable-energy development led the USFWS to develop a permitting process for wind facilities. The USFWS permitting process requires that fatality predictions be made in advance of a wind facility’s construction. This allows the facility’s impact to be assessed and any mitigation measures related to turbine placement on the landscape to be taken. The new model was developed specifically for the purpose of assessing take as part of the preconstruction permitting process.
The study supports a conservative approach, and the researchers’ model is used to inform the permitting process and balance the management of eagle fatalities.
The article, “A collision risk model to predict avian fatalities at wind facilities: an example using golden eagles, Aquila chrysaetos” by L.F. New, E. Bjerre, B. Millsap, M. Otto and M. Runge, is available in PLOS ONE online.
About the Golden Eagle:
The golden eagle has a vast range, from the tundra through grassland, forested habitat and woodland brushland, south to arid deserts, including Death Valley, California. Golden eagles are aerial predators that build nests on cliffs or in the largest trees of forested stands, which often afford an unobstructed view of the surrounding habitat.
This oblique aerial photograph from 2006 shows the Barter Island long-range radar station landfill threatened by coastal erosion. The landfill was subsequently relocated farther inland; however, the coastal bluffs continue to retreat. (High resolution image)
ANCHORAGE, Alaska — In a new study published today, scientists from the U.S. Geological Survey found that the remote northern Alaska coast has some of the highest shoreline erosion rates in the world. Analyzing more than half a century of shoreline change data, the scientists found the pattern extremely variable, with most of the coast retreating at rates of more than 1 meter per year.
“Coastal erosion along the Arctic coast of Alaska is threatening Native Alaskan villages, sensitive ecosystems, energy and defense related infrastructure, and large tracts of Native Alaskan, State, and Federally managed land,” said Suzette Kimball, acting director of the USGS.
Scientists studied more than 1,600 kilometers of the Alaskan coast between the U.S.-Canadian border and Icy Cape and found that the average rate of shoreline change, taking into account beaches that are both eroding and expanding, was -1.4 meters per year. Among eroding beaches, the most extreme case exceeded 18.6 meters per year.
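The reported average is a mean over individual survey transects, with accreting (positive) rates partially offsetting eroding (negative) ones. A small sketch with made-up transect rates, not data from the report:

```python
# Hypothetical shoreline-change rates in meters per year at ten transects;
# negative values are erosion, positive values are accretion.
rates = [-3.2, -0.8, 0.5, -2.1, 1.0, -8.6, 0.4, -2.0, -1.4, 2.2]

mean_rate = sum(rates) / len(rates)
eroding = [r for r in rates if r < 0]

print(f"mean rate, all transects: {mean_rate:.1f} m/yr")
print(f"eroding transects: {len(eroding)} of {len(rates)}")
print(f"fastest erosion: {min(rates)} m/yr")
```

Even with several accreting transects, the mean for these invented numbers comes out negative (-1.4 m/yr), illustrating how a region can erode on average while individual beaches expand.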
“This report provides invaluable objective data to help native communities, scientists and land managers understand natural changes and human impacts on the Alaskan coast,” said Ann Gibbs, USGS Geologist and lead author of the new report.
Coastlines change in response to a variety of factors, including changes in the amount of available sediment, storm impacts, sea-level rise and human activities. How much a coast erodes or expands in any given location is due to some combination of these factors, which vary from place to place.
"There is increasing need for this kind of comprehensive assessment in all coastal environments to guide managed response to sea-level rise and storm impacts," said Dr. Bruce Richmond of the USGS. "It is very difficult to predict what may happen in the future without a solid understanding of what has happened in the past. Comprehensive regional studies such as this are an important tool to better understand coastal change. ”
Compared with other coastal areas of the U.S., where four or more historical shoreline data sets are available, generally reaching back to the mid-1800s, shoreline data for the coast of Alaska are limited. The researchers used two historical data sources from the 1940s and 2000s, such as maps and aerial photographs, as well as modern data such as lidar (“light detection and ranging”), to measure shoreline change at more than 26,567 locations.
There is no widely accepted standard for analyzing shoreline change. The impetus behind the National Assessment project was to develop a standardized method of measuring changes in shoreline position that is consistent on all coasts of the country. The goal was to facilitate the process of periodically and systematically updating the results in a consistent manner.
The report, titled “National Assessment of Shoreline Change: Historical Shoreline Change Along the North Coast of Alaska, U.S.-Canadian Border to Icy Cape,” is the eighth long-term coastal change report produced as part of the USGS National Assessment of Coastal Change Hazards project. A comprehensive database of digital vector shorelines and rates of shoreline change for Alaska, from the U.S.-Canadian border to Icy Cape, is presented along with the report. Data for all eight long-term coastal change reports are also available on the USGS Coastal Change Hazards Portal.
ANCHORAGE, Alaska — Greenhouse gas emissions remain the primary threat to the preservation of polar bear populations worldwide. This conclusion holds true under both a reduced greenhouse gas emission scenario that stabilizes climate warming and another scenario where emissions and warming continue at the current pace, according to updated U.S. Geological Survey research models.
Under both scenarios, the outcome for the worldwide polar bear population will very likely worsen over time through the end of the century.
The modeling effort examined the prognosis for polar bear populations in the four ecoregions (see map) comprising their range using current sea ice projections from the Intergovernmental Panel on Climate Change for two greenhouse gas emission scenarios. Both scenarios examined how greenhouse gas emissions may affect polar bears: one looked at stabilization in climate warming by century’s end because of reduced GHG emissions, and the other looked at unabated (unchanged) rates of GHG emissions, leading to increased warming by century’s end.
“Addressing sea ice loss will require global policy solutions to reduce greenhouse gas emissions and likely be years in the making,” said Mike Runge, a USGS research ecologist. “Because carbon emissions accumulate over time, there will be a lag, likely on the order of several decades, between mitigation of emissions and meaningful stabilization of sea ice loss.”
Under the unabated emission scenario, polar bear populations in two of the four ecoregions were projected to reach a greatly decreased state about 25 years sooner than under the stabilized scenario. Under the stabilized scenario, GHG emissions peak around 2040, decline through 2080, and then stabilize through the end of the century. In this scenario, USGS projected that all ecoregion populations will greatly decrease except for the Archipelago Ecoregion, located in the high-latitude Canadian Arctic, where sea ice generally persists longer in the summer. These updated modeling outcomes reinforce earlier suggestions of the Archipelago’s potential as an important refuge for ice-dependent species, including the polar bear.
The models, updated from 2010, evaluated specific threats to polar bears such as sea ice loss, prey availability, hunting, and increased human activities, and incorporated new findings on regional variation in polar bear response to sea ice loss.
“Substantial sea ice loss and expected declines in the availability of marine prey that polar bears eat are the most important specific reasons for the increasingly worse outlook for polar bear populations,” said Todd Atwood, research biologist with the USGS and lead author of the study. “We found that other environmental stressors, such as trans-Arctic shipping, oil and gas exploration, disease and contaminants, sustainable harvest, and defense-of-life takes, had only negligible effects on polar bear populations compared to the much larger effects of sea ice loss and associated declines in their ability to access prey.”
Additionally, USGS researchers noted that if the summer ice-free period lengthens beyond 4 months – as forecasted to occur during the last half of this century in the unabated scenario – the negative effects on polar bears will be more pronounced. Polar bears rely on ice as the platform for hunting their primary prey – ice seals – and when sea ice completely melts in summer, the bears must retreat to land where their access to seals is limited. Other research this year has shown that terrestrial foods available to polar bears during these land-bound months are unlikely to help polar bear populations adapt to sea ice loss.
USGS scientists’ research found that managing threats other than greenhouse gas emissions could slow the progression of polar bear populations to an increasingly worse status. The most optimistic prognosis for polar bears would require immediate and aggressive reductions of greenhouse gas emissions that would limit global warming to less than 2°C above preindustrial levels.
The U.S. Fish and Wildlife Service listed the polar bear as threatened under the Endangered Species Act in 2008 due to the threat posed by sea ice loss. The polar bear was the first species to be listed because of climate change. A plan to address recovery of the polar bear will be published in the Federal Register by the USFWS for public review on July 2, 2015.
The updated forecast for polar bears was developed by USGS as part of its Changing Arctic Ecosystems Initiative, together with collaborators from the U.S. Forest Service and Polar Bears International. The polar bear forecasting report is available online.
Polar Bear Ecoregions: In the Seasonal Ice Ecoregion (see map), sea ice melts completely in summer and all polar bears must be on land. In the Divergent Ice Ecoregion, sea ice pulls away from the coast in summer, and polar bears must be on land or move with the ice as it recedes north. In the Convergent Ice and Archipelago Ecoregions, sea ice is generally retained during the summer.
The amount of water required to hydraulically fracture oil and gas wells varies widely across the country, according to the first national-scale analysis and map of hydraulic fracturing water usage, detailed in a new USGS study accepted for publication in Water Resources Research, a journal of the American Geophysical Union. The research found that average water volumes for hydraulic fracturing within watersheds across the United States range from as little as 2,600 gallons to as much as 9.7 million gallons per well.
This map shows the average water use in hydraulic fracturing per oil and gas well in watersheds across the United States.
In addition, from 2000 to 2014, median annual water volume estimates for hydraulic fracturing in horizontal wells increased from about 177,000 gallons per oil and gas well to more than 4 million gallons per oil well and 5.1 million gallons per gas well. Meanwhile, median water use in vertical and directional wells remained below 671,000 gallons per well. For comparison, an Olympic-sized swimming pool holds about 660,000 gallons.
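The swimming-pool comparison can be checked with back-of-the-envelope arithmetic. The sketch below uses only the figures quoted above; it is illustrative, not part of the study.

```python
# Back-of-the-envelope check of the swimming-pool comparison above.
# All figures are the ones quoted in the text; nothing here is new data.
OLYMPIC_POOL_GAL = 660_000          # approximate gallons in an Olympic-size pool

median_gas_2014_gal = 5_100_000     # median horizontal gas well, 2014
median_vertical_gal = 671_000       # upper bound for vertical/directional wells

# A 2014 horizontal gas well used roughly 7.7 pools' worth of water...
print(round(median_gas_2014_gal / OLYMPIC_POOL_GAL, 1))
# ...while a vertical or directional well used about one pool or less.
print(round(median_vertical_gal / OLYMPIC_POOL_GAL, 1))
```

In other words, the jump from vertical to horizontal completions is roughly a factor of seven to eight in water demand.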
“One of the most important things we found was that the amount of water used per well varies quite a bit, even within a single oil and gas basin,” said USGS scientist Tanya Gallegos, the study’s lead author. “This is important for land and resource managers, because a better understanding of the volumes of water injected for hydraulic fracturing could be a key to understanding the potential for some environmental impacts.”
This map shows the percentage of oil and gas wells that use horizontal drilling in watersheds across the United States.
Horizontal wells are those that are first drilled vertically or directionally (at an angle from straight down) to reach the unconventional oil or gas reservoir and then laterally along the oil or gas-bearing rock layers. This is done to increase the contact area with the reservoir rock and stimulate greater oil or gas production than could be achieved through vertical wells alone.
However, horizontal wells also generally require more water than vertical or directional wells. In fact, in 52 out of the 57 watersheds with the highest average water use for hydraulic fracturing, over 90 percent of the wells were horizontally drilled.
Although there has been an increase in the number of horizontal wells drilled since 2008, about 42 percent of new hydraulically fractured oil and gas wells completed in 2014 were still either vertical or directional. The ubiquity of the lower-water-use vertical and directional wells explains, in part, why the amount of water used per well is so variable across the United States.
The watersheds where the most water was used to hydraulically fracture wells on average coincided with parts of the following shale formations:
- Eagle Ford (within watersheds located mainly in Texas)
- Haynesville-Bossier (within watersheds located mainly in Texas & Louisiana)
- Barnett (within watersheds located mainly in Texas)
- Fayetteville (within watersheds located in Arkansas)
- Woodford (within watersheds located mainly in Oklahoma)
- Tuscaloosa (within watersheds located in Louisiana & Mississippi)
- Marcellus & Utica (within watersheds located in parts of Ohio, Pennsylvania, West Virginia and within watersheds extending into southern New York)
Shale gas reservoirs are often hydraulically fractured using slick water, a fluid type that requires a lot of water. In contrast, tight oil formations like the Bakken (in parts of Montana and North Dakota) often use gel-based hydraulic fracturing treatment fluids, which generally contain lower amounts of water.
This research was carried out as part of a larger effort by the USGS to understand the resource requirements and potential environmental impacts of unconventional oil and gas development. Prior publications include historical trends in the use of hydraulic fracturing from 1947-2010, as well as the chemistry of produced waters from hydraulically fractured wells.
The report is entitled “Hydraulic fracturing water use variability in the United States and potential environmental implications,” and has been accepted for publication in Water Resources Research. More information about this study and other USGS energy research can be found at the USGS Energy Resources Program. Stay up to date on USGS energy science by signing up for our quarterly Newsletter or following us on Twitter!
Wading bird numbers in the Florida Everglades are driven by water patterns that play out over multiple years, according to a new study by the U.S. Geological Survey and Florida Atlantic University. Previously, current-year water conditions were seen as the primary factor affecting the number of birds, but this research shows that the preceding years’ water conditions and availability are equally important.
“We’ve known for some time that changes in water levels trigger a significant response by wading birds in the Everglades,” said James Beerens, the study’s lead author and an ecologist at USGS. “But what we discovered in this study is the importance of history. What happened last year can tell you what to expect this year.”
From 2000 to 2009, scientists examined foraging distribution and abundance data for wading bird populations, including Great Egrets, White Ibises, and threatened Wood Storks. To do the research, they conducted reconnaissance flights across the Greater Everglades system, an area that includes Big Cypress National Preserve and Everglades National Park. They found climate and water management conditions going as far back as three years influenced current bird population numbers and distribution.
“We know wading birds depend on small fish and invertebrates for food,” said Dale Gawlik, director of FAU’s Environmental Science Program and study coauthor. “What is interesting is the ‘lag effect’; wet conditions that build up invertebrate and fish numbers may not immediately result in increased bird numbers until after several more wet years.”
This new information has allowed scientists to improve existing wading bird distribution models, providing a more accurate tool to estimate wading bird numbers under climate change scenarios and under the hydrological restoration scenarios proposed for the Everglades.
In the Everglades, food items such as small fish and crayfish are concentrated from across the landscape into pools as water levels recede throughout the dry season. It does not always work that way anymore due to a lack of water and loss of habitat in Everglades marshes. This new research shows that under the right dry season conditions following a water pulse in previous years, wading bird food is even further concentrated in near-perfect water depths, setting off a boom in the numbers of young wading birds that add to the population.
Beerens and computer scientists from the USGS have also developed publicly available software as an extension to this work that predicts wading bird numbers in the Everglades based on real-time, current conditions, in addition to historical settings. This new model allows managers to simulate the effect of various management strategies that can have an impact on future bird numbers. The number and distribution of wading birds serve as an important indicator of ecosystem health in the Everglades. Beerens further explained that “increased seasonal water availability in drier areas of the Everglades stimulates the entire ecosystem, as reflected in the wading birds.”
Altered water patterns resulting from land-use and water-management changes have reduced wading bird numbers throughout the Everglades by about 90 percent since the turn of the 20th century. This research shows that current water management and use are equally important.
“Our findings also suggest that we can continue to improve the Everglades and its wading bird community by restoring water availability to areas that are over drained,” said Beerens. “There is increasing understanding that water availability and proper management make this entire ecological and economic engine work.”
Florida generates more than $3 billion in annual revenue from resident and nonresident wildlife watchers according to estimates from the U.S. Fish and Wildlife Service. Of the 1.9 million people who view wildlife in Florida while ‘away-from-home’ each year, more than 1.3 million watch wading birds and other water-dependent birds.
The study, “Linking Dynamic Habitat Selection with Wading Bird Foraging Distributions across Resource Gradients,” was published in the journal PLOS ONE and can be found online.
Scientists are expecting that this year’s Chesapeake Bay hypoxic low-oxygen zone, also called the “dead zone,” will be approximately 1.37 cubic miles – about the volume of 2.3 million Olympic-size swimming pools. While still large, this is 10 percent lower than the long-term average as measured since 1950.
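The volume-to-pool conversion above can be sanity-checked with simple unit arithmetic. This is an illustrative sketch; the conversion constants are standard approximations and do not come from the forecast itself.

```python
# Rough check that 1.37 cubic miles is about 2.3 million Olympic-size pools.
# Conversion constants are standard approximations, not from the forecast.
M_PER_MILE = 1609.344               # meters per statute mile
GAL_PER_M3 = 264.172                # U.S. gallons per cubic meter
POOL_GAL = 660_000                  # gallons in an Olympic-size pool

gal_per_cubic_mile = M_PER_MILE**3 * GAL_PER_M3   # about 1.1 trillion gallons
pools_millions = 1.37 * gal_per_cubic_mile / POOL_GAL / 1e6
print(round(pools_millions, 1))     # about 2.3
```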
The anoxic portion of the zone, which contains no oxygen at all, is predicted to be 0.27 cubic miles in early summer, growing to 0.28 cubic miles by late summer. Low river flow and low nutrient loading from the Susquehanna River this spring account for the smaller predicted size.
This is the ninth year for the Bay outlook, which, because of the shallow nature of large areas of the estuary, focuses on water volume in cubic miles instead of the square mileage used in the Gulf of Mexico dead zone forecast announced last week. The history of hypoxia in the Chesapeake Bay since 1985 can be found at EcoCheck, a website from the University of Maryland Center for Environmental Science.
The Bay’s hypoxic and anoxic zones are caused by excessive nutrient pollution, primarily from human activities such as agriculture and wastewater. The nutrients stimulate large algal blooms that deplete oxygen from the water as they decay. The low oxygen levels are insufficient to support most marine life and habitats in near-bottom waters and threaten the Bay’s production of crabs, oysters and other important fisheries.
The Chesapeake Bay Program coordinates a multi-year effort to restore the Bay’s water and habitat quality and enhance its productivity. The forecast and the oxygen measurements taken during summer monitoring cruises are used to test and improve our understanding of how nutrients, hydrology, and other factors affect the size of the hypoxic zone. They are key to developing effective hypoxia-reduction strategies.
The predicted “dead zone” size is based on models that forecast three features of the zone to give a comprehensive view of expected conditions: midsummer volume of the low-oxygen hypoxic zone, early-summer volume of the oxygen-free anoxic zone, and late-summer volume of the oxygen-free anoxic zone. The models were developed by NOAA-sponsored researchers at the University of Maryland Center for Environmental Science and the University of Michigan. They rely on nutrient loading estimates from the U.S. Geological Survey.
"These ecological forecasts are good examples of the critical environmental intelligence products and tools that NOAA is providing to stakeholders and interagency management bodies such as the Chesapeake Bay Program," said Kathryn D. Sullivan, Ph.D., under secretary of commerce for oceans and atmosphere and NOAA administrator. “With this information, we can work collectively on ways to reduce pollution and protect our marine environments for future generations.”
The hypoxia forecast is based on the relationship between nutrient loading and oxygen. Aspects of weather, including wind speed, wind direction, precipitation and temperature also impact the size of dead zones. For example, in 2014, sustained winds from Hurricane Arthur mixed Chesapeake Bay waters, delivering oxygen to the bottom and dramatically reducing the size of the hypoxic zone to 0.58 cubic miles.
"Tracking how nutrient levels are changing in streams, rivers, and groundwater and how the estuary is responding to these changes is critical information for evaluating overall progress in improving the health of the Bay,” said William Werkheiser, USGS associate director for water. "Local, state and regional partners rely on this tracking data to inform their adaptive management strategies in Bay watersheds."
The USGS provides the nutrient runoff and river stream data that are used in the forecast models. USGS estimates that 58 million pounds of nitrogen were transported to the Chesapeake Bay from January to May 2015, which is 29 percent below average conditions. The Chesapeake data are funded through a cooperative agreement between USGS and the Maryland Department of Natural Resources. USGS operates more than 400 real-time stream gages and collects water quality data at numerous long-term stations throughout the Chesapeake Bay basin to track how nutrient loads are changing over time.
"Forecasting how a major coastal ecosystem, the Chesapeake Bay, responds to decreasing nutrient pollution is a challenge due to year-to-year variations and natural lags," said Dr. Donald Boesch, president of the University of Maryland Center for Environmental Science, "But we are heading in the right direction."
Later this year researchers will measure oxygen levels in the Chesapeake Bay. The final measurement in the Chesapeake will come in October following surveys by the Chesapeake Bay Program's partners from the Maryland Department of Natural Resources (DNR) and the Virginia Department of Environmental Quality. Bimonthly monitoring cruise updates on Maryland Bay oxygen levels can be found on DNR’s Eyes on the Bay website.
Scientists are expecting that this year’s Gulf of Mexico hypoxic zone, also called the “dead zone,” will be approximately 5,483 square miles or about the size of Connecticut — the same as it has averaged over the last several years.
The dead zone in the Gulf of Mexico affects nationally important commercial and recreational fisheries and threatens the region's economy. Hypoxic zones hold very little oxygen, and are caused by excessive nutrient pollution, primarily from activities such as agriculture and wastewater. The low oxygen levels cannot support most marine life and habitats in near-bottom waters.
This year marks the first time the results of four models were combined. The four model predictions ranged from 4,344 to 5,985 square miles, with a collective predictive interval of 3,205 to 7,645 square miles that takes into account variations in weather and oceanographic conditions.
The NOAA-sponsored Gulf of Mexico hypoxia forecast has improved steadily in recent years, a result of advancements of individual models and an increase in the number of models used for the forecast. Forecasts based on multiple models are called ensemble forecasts and are commonly used in hurricane and other weather forecasts.
The ensemble models were developed by NOAA-sponsored modeling teams and researchers at the University of Michigan, Louisiana State University, Louisiana Universities Marine Consortium, Virginia Institute of Marine Sciences/College of William and Mary, Texas A&M University, North Carolina State University, and the U.S. Geological Survey (USGS). The hypoxia forecast is part of a larger NOAA effort to deliver ecological forecasts that support human health and well-being, coastal economies, and coastal and marine stewardship.
“NOAA, along with our partners, continues to improve our capability to generate environmental data that can help mitigate and manage this threat to Gulf fisheries and economies,” said Kathryn D. Sullivan, Ph.D., under secretary of commerce for oceans and atmosphere and NOAA administrator. “We are adding models to increase the accuracy of our dead zone forecast."
The Gulf of Mexico hypoxia forecast is based on nutrient runoff and river stream data from the USGS. The USGS operates more than 3,000 real-time stream gauges and 50 real-time nitrate sensors, and collects water quality data at long-term stations throughout the Mississippi River basin to track how nutrient loads are changing over time.
The USGS estimates that 104,000 metric tons of nitrate and 19,300 metric tons of phosphorus flowed down the Mississippi and Atchafalaya rivers into the Gulf of Mexico in May 2015. This is about 21 percent below the long-term (1980-2014) average for nitrogen and 16 percent above the long-term average for phosphorus.
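Working backward from the quoted loads and percentage deviations gives the implied long-term averages. This is an illustrative back-calculation; the release reports the deviations, not the averages, directly.

```python
# Back-calculate the implied long-term (1980-2014) average loads from the
# May 2015 figures and the quoted percentage deviations. Illustrative only.
nitrate_may2015_t = 104_000      # metric tons, 21 percent below average
phosphorus_may2015_t = 19_300    # metric tons, 16 percent above average

implied_avg_nitrate = nitrate_may2015_t / (1 - 0.21)
implied_avg_phosphorus = phosphorus_may2015_t / (1 + 0.16)
print(round(implied_avg_nitrate))       # roughly 132,000 metric tons
print(round(implied_avg_phosphorus))    # roughly 16,600 metric tons
```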
"Real-time nitrate sensors are advancing our understanding of how nitrate is transported in small streams and large rivers, including the main stem of the Mississippi River,” said William Werkheiser, USGS associate director for water. “Long-term monitoring is critical for tracking how nutrient levels are changing in response to management actions and for improving modeling tools to estimate which sources and areas are contributing the largest amounts of nutrients to the Gulf. "
The confirmed size of the 2015 Gulf hypoxic zone will be released in early August, following a monitoring survey led by the Louisiana Universities Marine Consortium from July 28 to August 4.
A new GPS survey of Mount McKinley, the highest point in North America, will update the commonly accepted elevation of McKinley’s peak, 20,320 ft. The last survey was completed in 1953.
The USGS, along with NOAA’s National Geodetic Survey (NGS) and the University of Alaska Fairbanks (UAF), is supporting a Global Positioning System (GPS) survey of the Mount McKinley apex. Surveying technology and processes have improved greatly since the last survey, and the ability to establish a much more accurate height now exists. With the acquisition of new elevation (ifsar) data in Alaska as part of the 3D Elevation Program, there have been inquiries about the height of the summit. The survey party is being led by CompassData, a subcontractor for Dewberry on a task awarded under the USGS’ Geospatial Products and Services Contract (GPSC).
Using modern GPS survey equipment and techniques, along with better gravity data to improve the geoid model in Alaska, the partners will be able to report the summit elevation with a much higher level of confidence than has been possible in the past. It is anticipated the newly surveyed elevation will be published by the National Geodetic Survey in late August.
An experienced team of four climbers, one from UAF and three from CompassData, will start the precarious trek to the summit in mid-June with the needed scientific instruments in tow. They plan to return on or before July 7 and begin work with the University of Alaska Fairbanks and NGS processing the data to arrive at the new summit elevation.
At 20,320 feet, Mount McKinley is North America’s highest peak. (Photo courtesy of Todd Paris, UAF)
Climbing Mount McKinley, North America’s highest peak, is a daunting task for even the most experienced mountaineers at Denali National Park in Alaska. (Photo courtesy of National Geographic)
The Mount McKinley survey team and their equipment are expected to face temperatures well below zero, high winds and frequent snow.
North America may have once been attached to Australia, according to research just published in Lithosphere and spearheaded by U.S. Geological Survey geologist James Jones and his colleagues at Bucknell University and Colorado School of Mines.
Approximately every 300 million years, the Earth completes a supercontinent cycle wherein continents drift toward one another and collide, remain attached for millions of years, and eventually rift back apart. Geologic processes such as subduction and rifting aid in the formation and eventual break-up of supercontinents, and these same processes also help form valuable mineral resource deposits. Determining the geometry and history of ancient supercontinents is an important part of reconstructing the geologic evolution of Earth, and it can also lead to a better understanding of past and present mineral distributions.
North America is a key component in reconstructions of many former supercontinents, and there are strong geological associations between the western United States and Australia, which is one of the world’s leading mineral producers.
In this study, Jones and others synthesized mineral age data from ancient sedimentary rocks in the Trampas and Yankee Joe basins of Arizona and New Mexico. They found that the ages of many zircon crystals—mineral grains that were eroded from other rocks and embedded in the sedimentary deposits—were approximately 1.6 to 1.5 billion years old, an age range that does not match any known geologic age provinces in the entire western United States.
This surprising result actually mirrors previous studies of the Belt-Purcell basin (located in Montana, Idaho and parts of British Columbia, Canada) and a recently recognized basin in western Yukon, Canada, in which many zircon ages between 1.6 and 1.5 billion years old are common despite the absence of matching potential source rocks of this age.
However, the distinctive zircon ages in all three study locations do match the well-known ages of districts in Australia and, to a slightly lesser-known extent, Antarctica.
This publication marks the first time a complete detrital mineral age dataset has been compiled to compare the Belt basin deposits to strata of similar age in the southwestern United States. “Though the basins eventually evolved along very different trajectories, they have a shared history when they were first formed,” said Jones. “That history gives us clues as to what continents bordered western North America 1.5 billion years ago.”
The tectonic model presented in this paper suggests that the North American sedimentary basins were linked to sediment sources in Australia and Antarctica until the breakup of the supercontinent Columbia. The dispersed components of Columbia ultimately reassembled into Rodinia, perhaps the first truly global supercontinent in Earth’s history, around 1.0 billion years ago. Continued sampling and analysis of ancient sedimentary basin remnants will remain a critical tool for further testing global supercontinent reconstructions.