Reporters: Do you want to accompany USGS crews as they measure flooding? Please contact Jennifer LaVista or Heidi Koontz. Photos of the crews are available online.
U.S. Geological Survey real-time monitoring captured flash flooding in southwest Utah that occurred as a result of intense thunderstorms with rainfall rates estimated as high as three inches per hour. Particularly hard hit were the town of Hildale, Utah, and Zion National Park. The flash flood has resulted in 16 deaths.
Two USGS field crews are making streamflow measurements at gages in the area and are determining how high and how fast the water moved during the flash flood event.
“Events like this are not uncommon in southwestern Utah,” said Cory Angeroth, hydrologist with the USGS. “Our crews are providing real-time streamflow information to emergency managers and National Weather Service (NWS) flood forecasters so that they can make informed flood management decisions as thunderstorms continue to move through the area.”
USGS scientists collect critical streamflow data that are vital for protection of life, property and the environment. These data are used by the NWS to develop flood forecasts; by the Bureau of Reclamation and the U.S. Army Corps of Engineers to manage flood control; and by local and state emergency managers in their flood response activities. More information is available on the USGS Utah Water Science Center website.
There are 154 USGS-operated streamgages in Utah that measure water levels, streamflow and rainfall. Current streamflow conditions are available online.
More detailed information on flooding in Utah is available on the WaterWatch flood page.
For more than 125 years, the USGS has monitored flow in selected streams and rivers across the U.S. The information is routinely used for water supply and management, monitoring floods and droughts, bridge and road design, determination of flood risk and for many recreational activities.
Access current flood and high flow conditions across the country by visiting the USGS WaterWatch website. Receive instant, customized updates about water conditions in your area via text message or email by signing up for USGS WaterAlert.
A new interactive mapping tool provides predicted concentrations for 108 pesticides in streams and rivers across the Nation and identifies which streams are most likely to exceed water-quality guidelines for human health or aquatic life.
Citizens and water managers can create maps showing where pesticides are likely to occur in local streams and rivers and evaluate the likelihood of concentrations exceeding water-quality guidelines. The predictions can also be used to design cost-effective monitoring programs.
“Because pesticide monitoring is very expensive, we cannot afford to directly measure pesticides in all streams and rivers,” said William Werkheiser, USGS Associate Director for Water. “This model can be used to estimate pesticide levels at unmonitored locations to provide a national assessment of pesticide occurrence.”
“The USGS pesticide model is a valuable tool that we can use, along with other modeling and analytical tools, to evaluate data as we complete ecological risk assessments for pesticides,” said Dr. Donald J. Brady, Director, Environmental Fate and Effects Division, Office of Pesticide Programs, U.S. Environmental Protection Agency.
“Streams and rivers most vulnerable to pesticides can be assessed,” said Wes Stone, USGS hydrologist and lead developer of the model. “For instance, many streams in the Corn Belt region are predicted to have a greater than 50 percent probability that one or more pesticides exceed aquatic-life benchmarks.”
The online mapping tool is based on a USGS statistical model — referred to as Watershed Regression for Pesticides (or “WARP”) — which provides key statistics for thousands of streams, including the probability that a pesticide may exceed a water-quality benchmark and the reliability of each prediction.
The WARP model estimates concentrations using information on the physical and chemical properties of pesticides, agricultural pesticide use, soil characteristics, hydrology, and climate.
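The WARP approach of predicting exceedance probabilities from watershed characteristics can be illustrated with a minimal regression-style sketch. The predictor names and coefficients below are invented for illustration only; they are not the published WARP model coefficients.

```python
import math

def exceedance_probability(intercept, coefs, predictors):
    """Logistic-style probability that a pesticide concentration exceeds a
    water-quality benchmark, given watershed predictors. All coefficients
    here are hypothetical, not the published WARP regression values."""
    z = intercept + sum(coefs[name] * value for name, value in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors for one stream reach (illustrative values)
coefs = {"pesticide_use_kg_km2": 0.8, "soil_koc": -0.3, "pct_cropland": 0.5}
predictors = {"pesticide_use_kg_km2": 1.2, "soil_koc": 0.9, "pct_cropland": 0.7}
p = exceedance_probability(-1.5, coefs, predictors)
print(round(p, 3))
```

A model of this general shape lets an unmonitored stream be scored from mapped inputs (pesticide use, soils, land cover) without direct sampling, which is the cost-saving point quoted above.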
The model used by the mapping tool is based on data from USGS monitoring of pesticides in streams across the Nation since 1992 as part of the National Water-Quality Assessment (NAWQA) Program. Since 1991, NAWQA has been a primary source of nationally consistent data and information on the quality of the Nation’s streams and groundwater. Objective and nationally consistent water-quality data and models provide answers to where, when, and why the Nation’s water quality is degraded and what can be done to improve it for human and ecosystem needs.
Interactive mapping of predicted pesticide levels for streams in the U.S. is available online.
National maps and trend graphs of agricultural use of 459 pesticides from 1992 to 2012 for the conterminous U.S. are also available online.
Modeled national perspective of the prevalence of the insecticide chlorpyrifos, 2012 data
Chlorpyrifos is an insecticide used commonly on cotton, corn, citrus, and almond crops. For 2012, streams in the Midwest, central Texas, southwest Florida, and the Central Valley in California were predicted to have chlorpyrifos levels with a greater likelihood of exceeding the acute fish aquatic life benchmark. Use the online mapper to view the spatial variability of over 100 pesticides.
Several of the 1,028 new US Topo quadrangles for Florida now display parts of the Florida National Scenic Trail (FNST) and other designated public trails. For Gulf Coast residents, recreationists and visitors who want to explore the featured Florida trails by biking, hiking, horseback or other means, the new trail features on the US Topo maps will be useful.
The FNST is a congressionally designated, long-distance hiking trail that weaves its way across more than 1,000 miles of Florida from Big Cypress National Preserve in the south to Gulf Islands National Seashore in the western end of Florida’s panhandle. The Trail is a national treasure, being one of only 11 National Scenic Trails in the country, and one of three contained entirely within a single state.
“As administrators of the Florida National Scenic Trail, we work with a variety of partners to ensure that the trail is managed, and interpreted, consistently across changing landscapes and management boundaries,” said Shawn Thomas, Florida National Scenic Trail Program Manager. “Illustrating the FNST on the widely accessible US Topo maps will not only further this management purpose, but allow more recreationists and potential resource stewards to learn about the Florida National Scenic Trail and enjoy the natural, scenic, cultural, and historical resources the Trail corridor has to offer.”
The USGS partnered with the U.S. Forest Service to incorporate the trail data onto the revised Florida US Topo maps. The Florida National Scenic Trail joins the Appalachian National Scenic Trail, Arizona National Scenic Trail, Continental Divide National Scenic Trail, Ice Age National Scenic Trail, Natchez Trace National Scenic Trail, New England National Scenic Trail, North Country National Scenic Trail, Pacific Crest National Scenic Trail, and the Pacific Northwest National Scenic Trail as trails featured on the new US Topo quads. The USGS plans to eventually include all National Scenic Trails in The National Map products.
The U.S. Forest Service has provided boundary and road data for the US Topo map series for the past five years, and is now working on a national dataset of recreational trails.
Some of the other data for new trails on the maps are provided to the USGS through a nationwide “crowdsourcing” project managed by the International Mountain Biking Association (IMBA). This unique crowdsourcing venture has increased the amount and diversity of trail data available through The National Map mobile and web apps, and the revised US Topo maps.
During the past two years the IMBA, in a partnership with the MTB Project, has been building a detailed national database of trails. This activity allows local IMBA chapters, IMBA members, and the public to provide trail data and descriptions through their website. MTB Project and IMBA then verify the quality of the trail data provided, ensure accuracy and confirm the trail is officially designated for public use.
Further significant additions to the new quadrangles include map symbol redesign, enhanced railroad information and new road source data.
The new 2015 US Topo map coverage of Florida replaces the first-edition US Topo maps for the Sunshine State; the new maps are available for free download from The National Map, the USGS Map Locator & Downloader website, and several other USGS applications.
To compare change over time, scans of legacy USGS topo maps, some dating back to the late 1800s, can be downloaded from the USGS Historical Topographic Map Collection.
For more information on US Topo maps: http://nationalmap.gov/ustopo/

Image captions:
- Updated 2015 version of the Spring Creek US Topo quadrangle with orthoimage turned on (1:24,000 scale).
- Updated 2015 version of the Spring Creek US Topo quadrangle with orthoimage turned off to better show trails (1:24,000 scale).
- Scan of the 1940 legacy topographic map quadrangle of the Spring Creek area (Arran quad, 1:62,500 scale) from the USGS Historical Topographic Map Collection.
- The Florida National Scenic Trail is currently more than 1,000 miles long, with 1,300 total miles planned. The U.S. Forest Service has divided the Trail into four main geographic regions: the Southern region, the Central region, the Northern region, and the Panhandle region.
A series of 100 photos may reduce the risk of Native Americans and Alaska Natives being exposed to or consuming water or food containing harmful cyanobacteria.
The colorful images are part of a new field and laboratory guide developed by the U.S. Geological Survey to help Native American and Alaska Native communities develop an awareness of what harmful algal blooms look like in the field and be able to distinguish them from non-toxic blooms.
Harmful algal blooms that are dominated by certain cyanobacteria are known to produce a variety of toxins that can negatively affect fish, wildlife and people. Exposure to these toxins can cause a range of effects, from simple skin rashes to liver and nerve damage and even death, though deaths in people are rare.
The issue may be increasing in importance, as scientists indicate warming global temperatures may exacerbate the growth of harmful algal blooms.
“We are likely to see more cyanobacterial blooms in the future as waters continue to warm,” said Barry Rosen, a USGS biologist and author of the guide. “Cyanobacteria proliferate in warm water temperatures, generally about 25 degrees Celsius (77 F), and are more tolerant of these warmer conditions than their competitors, such as green algae. We expect numerous other physiological adaptations will give cyanobacteria an advantage as global climate changes occur.”
While communities worldwide may find the field and laboratory guide useful, those whose members have direct contact with surface water or consume fish and shellfish may find it particularly helpful.
“In the U.S., Native American and Alaska Native communities, especially those reliant on subsistence fishing or who have frequent contact with surface water, have an increased risk of exposure to cyanotoxins,” said Monique Fordham, the USGS National Tribal Liaison. “This guide will give them a new resource to use to monitor the waters they rely on and protect their people.”
Algae serve as the base of the food web in aquatic habitats. When algae cause a “problem,” typically a surface scum or accumulation on or near a shoreline, the event is called an “algal bloom” and is often termed a harmful algal bloom. An algal bloom forms under the right environmental conditions, including nutrient abundance, a stable water column, ample light, and optimal temperatures.
Although many different types of algae are responsible for harmful algal blooms, cyanobacteria, which produce natural cyanotoxins, pose the greatest problem and are the focus of this guide. The guide includes photos of what cyanobacteria blooms look like in a waterbody, as well as photos of cyanobacteria taken through the microscope; microscopic examination is needed to determine the type of bloom present.
The publication, “Field and Laboratory Guide to Freshwater Cyanobacteria Harmful Algal Blooms for Native American and Alaska Native Communities,” by Barry H. Rosen and Ann St. Amand, is available online.
Rosen, B.H., and St. Amand, Ann, 2015, Field and laboratory guide to freshwater cyanobacteria harmful algal blooms for Native American and Alaska Native communities: U.S. Geological Survey Open-File Report 2015–1164.
OAKHURST, Calif. -- Overall fire threats to greater sage-grouse habitat are much higher in the western part of the species’ range than in the eastern part, according to a U.S. Geological Survey fire threats assessment study published today.
The USGS report provides a scientific assessment of a 30-year period of comprehensive fire data (1984-2013) across sage-grouse management zones (see map) and vegetation types that include sagebrush as a major component. Researchers evaluated the implications of these findings for conservation and management of the greater sage-grouse in wildland areas across the species’ range.
The greater sage-grouse’s range is divided into seven management zones. The four western zones are distinguished from the three eastern zones by differences in rainfall and vegetation, which affect fire regimes. Overall, the results indicate that fire threats are higher in the four western zones than in the three eastern zones.
Fires have the potential to degrade habitat conditions for the greater sage-grouse, and there is evidence that fire is becoming increasingly prevalent in the western United States, in many cases associated with the spread of invasive annual grasses.
“During recent decades, fire area and fire size have increased across large portions of the western zones, hindering recovery of sagebrush and threatening sage-grouse habitat,” said lead author Matthew Brooks, a USGS fire expert and research scientist at the USGS Western Ecological Research Center. “In contrast, parts of the eastern zones were less impacted by fire, and may actually have less fire than historically occurred.”
Sage-grouse rely on sagebrush habitat for food and breeding. This research focused specifically on sage-grouse habitat within sage-grouse population areas across the species’ 11-state range so that the research could best inform sage-grouse conservation and management efforts.
For example, noted Brooks, the fire history, vegetation type and soil moisture data developed in this study can be combined with other data to create science-based potential risk assessment maps for the establishment of a grass/fire cycle or habitat degradation information for the greater sage-grouse. “Also,” he added, “in light of these findings, it may be useful for managers to reconsider the relative importance of wildfire as a threat to greater sage-grouse in the eastern management zones.”
- Among the four western management zones, the Snake River Plain and the Columbia Basin ranked slightly higher than the Southern Great Basin and Northern Great Basin in terms of fire effects on sage-grouse habitat. These results support the previous high ranking of fire as a threat to the greater sage-grouse in the western region.
- The eastern zones were less impacted by fire, and may actually have less fire than historically occurred, especially in the Grassland vegetation type.
- Among the vegetation types with a dominant sagebrush component, Big Sagebrush accounted for 56 percent of vegetation burned in the 30 years examined, Black/Low Sagebrush 14 percent, Grassland 10 percent, Desert Mixed Scrub 4 percent, Mountain Brush 1 percent, and Floodplain less than 1 percent. Non-Sagebrush accounted for 15 percent of the area burned.
- Increasing trends in percentage of fires in larger fire size classes were noted in the western management zones, but not in the eastern zones.
- Recurrent fire area (burning 2 or more times) encompassed 22 percent of the total fire area in the western region, most of which was in the Snake River Plain and in the Big Sagebrush vegetation type. Although the smallest amount of recurrent fire area was in the Columbia Basin, it represented 34 percent of the total fire area in that management zone.
- Fire rotation estimates were generally lower than published estimates of historical conditions for Big Sagebrush and Black/Low Sagebrush in the western management zones. In contrast, Big Sagebrush fire rotations were generally higher than historical estimates in the eastern management zones.
- The length of the fire season was fairly constant during the 30-year study period for all except the Southern Great Basin, Great Plains, and Wyoming Basin, which displayed significantly increasing trends toward longer fire seasons.
- Big Sagebrush, Black/Low Sagebrush and Desert Mixed Scrub consist mostly of areas that have low resilience to fire and recover slowly. Non-native annual grasses often grow back in these areas, which in turn increases fire size and frequency, decreasing suitable habitat for the greater sage-grouse.
- Floodplain, Grassland, and Mountain Brush are mostly in areas of relatively high resilience and resistance to non-native annual grasses.
This research, Fire patterns in the range of greater sage-grouse, 1984–2013—Implications for conservation and management, was authored by M.L. Brooks, J.R. Matchett, D.J. Shinneman, and P.S. Coates, all of USGS. U.S. Geological Survey Open-File Report 2015-1167.
About Greater Sage-Grouse
Greater sage-grouse occur in parts of 11 U.S. states and 2 Canadian provinces in western North America. The U.S. Fish and Wildlife Service is formally reviewing the status of greater sage-grouse to determine if the species is warranted for listing under the Endangered Species Act.

Image caption: Study area showing boundaries of each of the seven greater sage-grouse management zones (based on Stiver and others, 2006) that were the primary strata for analyses in this study. The greater sage-grouse population areas, which represent the geographic extent of landscapes included in the analyses, are also shown.
Slowing fire-related population declines in greater sage-grouse in the Great Basin over the next 30 years may depend on the intensity of fire suppression efforts in core breeding areas and long-term patterns of precipitation, according to a just-published USGS-led study.
The authors conducted an extensive analysis of wildfire and climatic effects on greater sage-grouse population growth derived from 30 years (1984-2013) of breeding area-count data, along with wildfire and precipitation patterns. They constructed a model that also simulated different post-fire recovery times for sagebrush habitats based on soil attributes -- soil moisture and temperature maps -- that strongly influence resilience to wildfire and resistance to invasive grass species.
This research links multi-decadal patterns of wildfire across the Great Basin with multi-decadal data on greater sage-grouse population dynamics and climate.
If the current trend in wildfire continues unabated, model results predicted steady and substantial declines of greater sage-grouse populations across the Great Basin, with an average of about half of current population numbers being lost by the mid-2040s.
The researchers also found that greater sage-grouse populations increased following periods of above-average precipitation; however, the long-lasting effects of wildfire in greater sage-grouse breeding areas negated the positive effects associated with precipitation.
Forecasted climate change may result in less precipitation and warmer, drier soils in sagebrush ecosystems, leaving greater sage-grouse habitat vulnerable to increasingly frequent wildfires. Fire is a natural process in sagebrush ecosystems, but burn size and frequency in the Great Basin have increased over the past few decades in response to the increasing expansion of invasive grasses, primarily cheatgrass.
Wildfires kill nearly all native species of sagebrush, which can transform the habitat into landscapes dominated by invasive grasses when soils are warm and dry. In turn, the presence of invasive grasses can prevent sagebrush from returning and, by serving as tinder, result in a positive feedback loop that promotes more wildfires in future years.
“Greater sage-grouse population persistence may be compromised as sagebrush ecosystems and sage-grouse habitat become more impacted by fire and a changing climate,” said Peter Coates, a research scientist with the USGS Western Ecological Research Center. “Our research shows that targeted fire suppression in core sage-grouse areas is vital to help conserve large blocks of the best habitat for sage-grouse in the Great Basin.”
Scientists also examined different management scenarios that could help offset adverse wildfire effects on greater sage-grouse populations, especially when focused on areas with the best sage-grouse habitat and the greatest number of breeding sage-grouse.
For example, reducing the trend in annual cumulative burned area within 3.1 miles (5 kilometers) of lek sites by 25 percent in identified greater sage-grouse core areas is predicted to do little to prevent population declines over the next 30 years. Reducing it by 75 percent over the same period, however, would substantially slow the decline even under below-average precipitation, stabilize populations under normal precipitation, and result in population growth under above-average precipitation.
Coates noted that further long-term research can help identify populations that are most at risk from wildfire or changing climate and lead to more effective targeting of management resources for conservation of sagebrush and greater sage-grouse populations.
This peer-reviewed research, Long-term effects of wildfire on sage-grouse populations: an integration of population and ecosystem ecology for management in the Great Basin, was authored by Peter Coates, USGS; M.A. Ricca, USGS; B.G. Prochazka, USGS; K.E. Doherty, USFWS; M.L. Brooks, USGS; and M.L. Casazza, USGS.
About Greater Sage-Grouse and the Great Basin
The Great Basin comprises more than 72.7 million hectares (more than 179 million acres) across five states: Nevada, Utah, Idaho, Oregon and California. Wildfire has been identified as a primary disturbance in the Great Basin.
Greater sage-grouse occur in parts of 11 U.S. states and 2 Canadian provinces in western North America. The U.S. Fish and Wildlife Service is formally reviewing the status of greater sage-grouse to determine if the species is warranted for listing under the Endangered Species Act.
MENLO PARK, Calif. — Some of the inner workings of Earth’s subduction zones and their “megathrust” faults are revealed in a paper published today in the journal “Science.” U.S. Geological Survey scientist Jeanne Hardebeck calculated the frictional strength of subduction zone faults worldwide, and the stresses they are under. She found that stresses in subduction zones are low, although even this smaller amount of stress can still lead to a great earthquake.
Subduction zone megathrust faults produce most of the world’s largest earthquakes. The stresses are the forces acting on the subduction zone fault system, and are the forces that drive the earthquakes. Understanding these forces will allow scientists to better model the physical processes of subduction zones, and the results of these physical models may give us more insight into earthquake hazards.
“Even a ‘weak’ fault, meaning a fault with low frictional strength, can accumulate enough stress to produce a large earthquake. It may even be easier for a weak fault to produce a large earthquake, because once an earthquake starts, there aren't as many strongly stuck patches of the fault that could stop the rupture,” explained lead author and USGS geophysicist Hardebeck.
Although the physical properties of these faults are difficult to observe and measure directly, their frictional strength can be estimated indirectly by calculating the directions and relative magnitudes of the stresses that act on them. The frictional strength of a fault determines how much stress it can take before it slips, creating an earthquake.
Evaluating the orientations of thousands of smaller earthquakes surrounding the megathrust fault, Hardebeck calculated the orientation of stress and from that inferred that all of the faults in the subduction zone system have similar strength. Together with prior evidence that some subduction zone faults are “weak,” this implies that all of the faults are “weak” and that subduction zones are “low-stress” environments.
A “strong” fault has the frictional strength equivalent to an artificial fault cut in a rock sample in the laboratory. However, the stress released in earthquakes is only about one tenth of the stress that a “strong” fault should be able to withstand. A “weak” fault, in contrast, has only the strength to hold about one earthquake's worth of stress. A large earthquake on a “weak” fault releases most of the stress, and before the next large earthquake the stress is reloaded due to motion of the Earth’s tectonic plates.
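The “strong” versus “weak” comparison above amounts to simple arithmetic, sketched here with hypothetical round numbers (a laboratory-like friction coefficient and an assumed effective normal stress; neither value is taken from the study itself):

```python
# Illustrative comparison of a "strong" fault's frictional strength to a
# typical earthquake stress drop, using hypothetical round numbers.
friction_strong = 0.6          # laboratory-style friction coefficient (assumed)
normal_stress_mpa = 50.0       # assumed effective normal stress on the fault, MPa
strength_strong = friction_strong * normal_stress_mpa  # shear stress needed to slip

stress_drop_mpa = 3.0          # assumed stress released in one large earthquake, MPa

ratio = stress_drop_mpa / strength_strong
print(f"strong-fault strength: {strength_strong} MPa")
print(f"earthquake stress drop is {ratio:.0%} of that strength")
```

With these illustrative inputs, the stress released in one earthquake is about one tenth of what a “strong” fault should withstand, which is the mismatch described above; a “weak” fault, by contrast, holds only about one earthquake’s worth of stress.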
Contact: Catherine Puckett, phone: 352-377-2469
BOISE, Idaho — The network of greater sage-grouse priority areas is a highly centralized system of conservation reserves. The largest priority areas likely can support sage-grouse populations within their boundaries, but smaller priority areas will need to rely on their neighbors in the surrounding network to sustain local populations, according to new research by the U.S. Geological Survey.
Eleven western states and federal management agencies within the greater sage-grouse range have developed conservation plans that embrace the concept of priority areas – also referred to as Priority Areas for Conservation or PACs by the U.S. Fish and Wildlife Service. Priority areas are key habitats identified as having the highest number of greater sage-grouse and the resources of greatest benefit to the species. Priority areas were designed to balance the needs of sage-grouse with activities such as energy development by focusing conservation activities and regulating development in these areas.
“The development of the priority areas may represent one of the largest experiments in conservation reserve design for a single species,” said Michele Crist, USGS wildlife biologist and lead author of a new USGS Open-File Report. “We have an opportunity to understand the potential for these areas to function as a connected network to conserve greater sage-grouse populations.”
The researchers ranked the relative importance of all the priority areas using social network theory, centrality metrics, sage-grouse ecological minimums, and a range-wide map created by combining the priority area boundaries as delineated by the 11 western states within the sage-grouse range – Washington, Oregon, California, Idaho, Nevada, Montana, Utah, Wyoming, Colorado, North Dakota and South Dakota.
Centrality metrics described the number of connections any one priority area has to another priority area as well as how that priority area acts as a bridge between priority areas. These metrics, combined with an index of sage-grouse movement through the landscape based on a combination of ecological factors necessary to support sage-grouse, were used to rank the priority areas’ relative importance to one another. Highly ranked priority areas are large in size, more centrally located within the network, and surrounded by many other priority areas of various sizes.
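One of the centrality metrics described above, the number of connections each priority area has, can be sketched with a toy network. The priority-area names and links below are invented for illustration and are not from the study.

```python
# Toy illustration of degree centrality for a hypothetical network of
# priority areas; each link is a connection between two areas.
links = [
    ("A", "B"), ("A", "C"), ("A", "D"),  # "A" plays the large, central hub
    ("B", "C"), ("D", "E"),              # "E" connects only through "D"
]

# Count connections per priority area (degree centrality)
degree = {}
for u, v in links:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Rank priority areas by number of connections
ranking = sorted(degree, key=degree.get, reverse=True)
print(ranking[0], degree[ranking[0]])
```

In this toy network, losing the highest-ranked hub ("A") would disconnect much of the network, which mirrors the study's point that losing a large, highly ranked priority area has a disproportionate effect; the study additionally weighted such metrics by an ecological index of sage-grouse movement.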
The study shows that the loss or fragmentation of one of the larger, highly ranked priority areas could have a disproportionately large influence across the priority area network. The study also identifies linkages and narrow corridors between priority areas where landscape conditions are favorable for sage-grouse movement and may be important for sustaining sage-grouse movements among the priority areas. Other lower-ranking priority areas may also serve as important stepping stones along habitat corridors, connecting smaller, more isolated priority areas and allowing sage-grouse to move between them.
“The current priority area network consists of large and small areas that collectively and individually address requirements important for maintaining greater sage-grouse populations,” said USGS research ecologist and co-author Steve Knick. “This study’s findings may help predict impacts to connectivity when priority areas are lost, degraded, or fragmented.”
Knick noted that it is critical for strategies focused on conserving greater sage-grouse to assess not just the size of the priority area and the number of connections, but also how sage-grouse are linked together to function as a viable population.
The study was funded by the U.S. Geological Survey and the U.S. Fish and Wildlife Service, and was prepared in cooperation with the Western Association of Fish and Wildlife Agencies.
Greater sage-grouse occur in parts of 11 U.S. states and 2 Canadian provinces in western North America. The U.S. Fish and Wildlife Service is formally reviewing the status of greater sage-grouse to determine if the species is warranted for listing under the Endangered Species Act.
A new analysis of the largest known deposit of carbonate minerals on Mars helps limit the range of possible answers about how and why Mars changed from a world with watery environments billions of years ago to the arid Red Planet of today.
The modern Martian atmosphere is too thin for liquid water to persist on the surface. A denser atmosphere on ancient Mars could have kept water from immediately evaporating. It could also have allowed parts of the planet to be warm enough to keep liquid water from freezing. But if the atmosphere was once thicker, what happened to it? The new detective work makes one suspected route for atmospheric loss look less likely.
Christopher Edwards, a former Caltech researcher now with the U.S. Geological Survey in Flagstaff, Arizona, and Bethany Ehlmann of the California Institute of Technology and NASA Jet Propulsion Laboratory reported the findings and analysis in a paper posted online this month by the journal Geology.
Carbon dioxide makes up most of the Martian atmosphere. That gas can be pulled out of the air and sequestered in the ground by chemical reactions with rocks to form carbonate minerals. Years before the series of successful Mars missions in the past two decades, many scientists expected to find large Martian deposits of carbonates holding much of the carbon from the planet's original atmosphere. Instead, these missions have found low concentrations of carbonate distributed widely, but only a few concentrated deposits. By far the largest known carbonate-rich deposit on Mars covers an area at least the size of Delaware, in a region called Nili Fossae.
"The biggest carbonate deposit on Mars has, at most, twice as much carbon in it as the current Mars atmosphere," said Pasadena-based Ehlmann. "Even if you combined all known carbon reservoirs together, it is still nowhere near enough to sequester the thick atmosphere that has been proposed for the time when there were rivers flowing on the Martian surface."
Their estimate of how much carbon is locked into the Nili Fossae carbonate deposit uses observations from numerous Mars missions, including the Thermal Emission Spectrometer (TES) on NASA's Mars Global Surveyor orbiter, the mineral-mapping Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and two telescopic cameras on NASA's Mars Reconnaissance Orbiter, and the Thermal Emission Imaging System (THEMIS) on NASA's Mars Odyssey orbiter.
Edwards and Ehlmann compare their tally of sequestered carbon at Nili Fossae to what would be needed to account for an early Mars atmosphere dense enough to sustain surface waters during the period when flowing rivers left their mark by cutting extensive river-valley networks. By their estimate, it would require more than 35 carbonate deposits the size of the one examined at Nili Fossae. They deem it unlikely that so many large deposits have been overlooked in numerous detailed orbiter surveys of the planet. While deposits from an even earlier time in Mars history could be deeper and better hidden, they don't help solve the thin-atmosphere conundrum at the time of valley network formation.
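The scale of that mismatch can be sketched with back-of-envelope arithmetic. The numbers below are illustrative assumptions (a modern Mars surface pressure of roughly 7 millibars and one proposed early-atmosphere pressure of 0.5 bar), not figures taken from the paper:

```python
# All numbers here are illustrative assumptions, not values from the Geology paper.
current_pressure_mbar = 7.0    # approximate modern Mars surface pressure
proposed_early_mbar = 500.0    # one proposed early-Mars CO2 pressure (~0.5 bar)
carbon_per_deposit = 2.0       # Nili Fossae holds at most ~2x the current atmosphere's carbon

# How many "current atmospheres" of CO2 would sequestration need to account for?
atmospheres_needed = proposed_early_mbar / current_pressure_mbar

# How many Nili-Fossae-sized deposits would be needed to hold that much carbon?
deposits_needed = atmospheres_needed / carbon_per_deposit
print(f"{deposits_needed:.1f}")  # ~35.7 under these assumed values
```

With these assumed pressures the arithmetic lands near the "more than 35 deposits" figure the authors cite, which is why finding only one such deposit argues against mineral sequestration as the main sink.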
One possible explanation is that Mars did have a much denser atmosphere during its flowing-rivers period, and then lost most of it to outer space from the top of the atmosphere, rather than by sequestration in minerals. NASA's Curiosity Mars rover mission has found evidence for ancient top-of-atmosphere loss, based on the modern Mars atmosphere's ratio of heavier carbon to lighter carbon. Uncertainty remains about how much of that loss occurred before the period of valley formation; much may have happened earlier. NASA's MAVEN orbiter, examining the outer atmosphere of Mars since late 2014, may help reduce that uncertainty.
An alternative explanation, favored by Edwards and Ehlmann, is that the original Martian atmosphere had already lost most of its carbon dioxide by the era of valley formation.
"Maybe the atmosphere wasn't so thick by the time of valley network formation," Edwards said. "Instead of a Mars that was wet and warm, maybe it was cold and wet with an atmosphere that had already thinned. How warm would it need to have been for the valleys to form? Not very. In most locations, you could have had snow and ice instead of rain. You just have to nudge above the freezing point to get water to thaw and flow occasionally, and that doesn't require very much atmosphere."
Arizona State University, Tempe, provided the TES and THEMIS instruments. The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, provided CRISM. JPL, a division of Caltech, manages the Mars Reconnaissance Orbiter and Mars Odyssey project for NASA's Science Mission Directorate, Washington, and managed the Mars Global Surveyor project through its nine years of orbiter operations at Mars. Lockheed Martin Space Systems in Denver built the three orbiters.
For more information about USGS work with the Mars Reconnaissance Orbiter mission, visit the Astrogeology Science Center website.
For more information about USGS work, visit the Mars Odyssey mission website.
Rocks Here Sequester Some of Mars' Early Atmosphere.
This image combines data from two instruments (High Resolution Imaging Science Experiment and Compact Reconnaissance Imaging Spectrometer for Mars) onboard NASA’s Mars Reconnaissance Orbiter to map color-coded composition over the shape of the ground within the Nili Fossae plains region of Mars. Carbonate-rich deposits in this area (green hues) hold some carbon formerly in the atmosphere's carbon dioxide, while sand dunes (brown hues) are composed of olivine-bearing basalt and areas in purple hues are basaltic in composition. (Larger image)
Multiple Instruments Used for Mars Carbon Estimate.
The left image of this pair presents data from the Thermal Emission Imaging System onboard NASA’s Mars Odyssey orbiter. The color-coding indicates thermal inertia, the physical property of how quickly a surface material heats up or cools off. Sand, for example (blue hues), cools off more quickly after sundown than bedrock (red hues) does.
The right image of this pair presents the regional composition from the Compact Reconnaissance Imaging Spectrometer for Mars onboard NASA’s Mars Reconnaissance Orbiter. Green hues are consistent with carbonate-rich materials, brown/yellow hues are olivine-bearing sands, and locations with purple hues are basaltic in composition. The gray-scale base map is a mosaic of daytime THEMIS infrared images. (Larger image)
Mark Newell, APR (Phone: 573-308-3850)
A new, official height for Denali has been measured at 20,310 feet, just 10 feet less than the previous elevation of 20,320 feet, which was established using 1950s-era technology.
With this slightly lower elevation, has the tallest mountain in North America shrunk? No, but advances in technology to better measure the elevation at the surface of the Earth have resulted in a more accurate summit height of Alaska’s natural treasure.
“No place draws more public attention to its exact elevation than the highest peak of a continent. Knowing the height of Denali is precisely 20,310 feet has important value to earth scientists, geographers, airplane pilots, mountaineers and the general public. It is inspiring to think we can measure this magnificent peak with such accuracy," said Suzette Kimball, USGS acting director. "This is a feeling everyone can share, whether you happen to be an armchair explorer or an experienced mountain climber.”
Denali National Park, where the mountain is located, was established in 1917 and annually sees more than 500,000 visitors to the six million acres that now make up the park and preserve. About 1,200 mountaineers attempt to summit the mountain each year; typically about half are successful.
"Park rangers have been excited to work with and learn from their USGS colleagues using the latest technology to determine Denali's height,” said Denali NP Superintendent Don Striker. “Climbers and other visitors will be fascinated by this process, and I hope our future park rangers see from this firsthand example how a background in science, technology, engineering and mathematics, and staying physically active in the outdoors can enable them to do some of America's coolest jobs.”
To establish a more accurate summit height, the USGS partnered with NOAA’s National Geodetic Survey (NGS), Dewberry, CompassData (a subcontractor to Dewberry), and the University of Alaska, Fairbanks, to conduct a precise Global Positioning System (GPS) measurement of a specific point at the mountain’s peak.
A previous 2013 Denali survey was called into question with an elevation measurement of 20,237 feet. That survey was based on airborne radar data collected using an interferometric synthetic aperture radar (IFSAR) sensor. IFSAR is an extremely effective tool for collecting map data in challenging areas such as Alaska, but it does not provide precise spot or point elevations, especially in very steep terrain.
The climbing team of GPS experts and mountaineers reached the Denali summit in mid-June. Since then, they have been processing, analyzing, and evaluating the raw data to arrive at the final number of 20,310 feet. Unique circumstances and variables, such as the depth of the snowpack and establishing the appropriate surface that coincides with mean sea level, had to be taken into account before the new apex elevation could be determined.
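That last step is worth a rough illustration: GPS yields a height above a reference ellipsoid, while the elevation reported to the public is an orthometric height, obtained by subtracting the local geoid (mean-sea-level surface) offset. The numbers below are invented for the sketch and are not the survey's actual measurements:

```python
# Illustrative numbers only -- not the actual Denali survey values.
ellipsoid_height_ft = 20360.0   # hypothetical GPS height above the reference ellipsoid (h)
geoid_offset_ft = 50.0          # hypothetical geoid height above the ellipsoid (N)

# Orthometric height (elevation above mean sea level): H = h - N
orthometric_ft = ellipsoid_height_ft - geoid_offset_ft
print(orthometric_ft)  # 20310.0
```

In practice the geoid offset comes from a geoid model, and further corrections (such as the probed snow depth) factor into the final published elevation.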
A USGS feature story has more details about the trek, data collection and calculation methods.
A view of Denali from the airplane as the Survey team approached the Kahiltna Glacier to begin their ascent to the mountain’s summit. (Photo: Blaine Horner, CompassData) (Larger image)
Two of the Survey climbers continue their trek up towards the next base camp, with gear in tow. Much of the climbing was done at night or early morning to take advantage of the frozen ground. (Photo: Blaine Horner, CompassData) (Larger image)
Blaine Horner of CompassData probing the snowpack at the highest point in North America and setting up Global Positioning System equipment for precise summit elevation data. (Photo: Blaine Horner, CompassData) (Larger image)
From the 1880s to the 1950s, the U.S. Geological Survey (USGS) engraved information from its surveys on metal plates (usually copper) as part of a lithographic printing process to reproduce topographic and geologic maps, geologic cross sections, and other illustrations. The engraved plates show point and line symbols and text for topography, hydrography, geology, and cultural features.
The U.S. General Services Administration (GSA) is selling by auction to the public 1,795 sets of excess USGS engravings. (A set includes the engravings that USGS used to reproduce an illustration.) The available sets portray mapped areas in most States and Puerto Rico. This effort follows the successful auction of sets that GSA conducted last spring.
Because of the large number of sets, GSA will auction the sets in four sales. Each sale will auction about 450 sets. The auctions will occur online through the GSA Auctions web site.
The first sale, for sets that map areas in the western United States, started on August 28 and will end on September 11, 2015. A new sale will start about every 30 days.
After the reserve price is met, the price for each set will be decided by the highest bid. In addition to the amount bid, successful bidders will incur the cost of receiving and shipping their sets from Herndon, Virginia, where the sets are located.
To support the auctions, USGS posted the inventory of sets, notes about the sets, map files that show the areas mapped by most sets, and other information. USGS also posts status updates and a list of Frequently Asked Questions (FAQs) weekly. All this information is publicly available in files that can be downloaded from the Engravings FTP site.
A malformed (‘teratological’) chitinozoan specimen of the genus Ancyrochitina (a) and a morphologically normal specimen (b) of the same genus. Both of these Silurian microfossils are from the A1-61 well in Libya and are about 415 Ma old. Scale bars are 0.1 mm. (High resolution image)
Toxic metals such as iron, lead and arsenic may have helped cause mass extinctions in the world’s oceans millions of years ago, according to recent research from the U.S. Geological Survey; the National Center for Scientific Research, France; and Ghent University, Belgium. These findings largely came from studying “teratological,” or malformed, fossil plankton assemblages corresponding to the initial stages of extinction events approximately 420 million years ago that killed off most marine species.
At that time, several mass extinction events shaped the evolution of life on our planet. Some of these short-lived events were responsible for eradicating up to 85 percent of marine species; however, the exact kill-mechanism responsible for these crises remains poorly understood.
In a paper just published in Nature Communications, the scientists present evidence that malformed fossil remains of 415-million-year-old marine plankton contain highly elevated concentrations of heavy metals of the kind that can cause morphological abnormalities in today’s marine life. This led the authors to conclude that metal poisoning caused the observed malformations and may have contributed to the extinction of these and many other species.
“This paper is a testament to the power of multi-disciplinary research,” said USGS scientist Poul Emsbo, a lead author of the report. “Here, collaboration between a paleontologist and an ore-deposit geochemist has led to new data that unveils new processes that may ultimately explain the cause of catastrophic extinctions in earth history.”
The documented chemical behavior of the toxic metals correlates with previously observed disturbances in oceanic carbon, oxygen and sulfur signatures. Such behavior strongly suggests that these metal increases were a result of decreased oxygen in the ocean.
Thus, metal toxicity, and its expressions in fossilized malformations, could provide the missing link that relates organism extinctions to a widespread absence of ocean oxygen. As part of a series of complex systemic interactions accompanying oceanic geochemical variation, the mobilizations of metals in spreading low-oxygen waters may identify the early phase of the kill-mechanism that led to these catastrophic extinction events.
The recurring correlation between fossil malformations and Ordovician-Silurian extinction events raises the provocative prospect that toxic metal contamination may be a previously unrecognized contributing agent to many, if not all, extinction events in the ancient oceans.
Coastal managers and planners now have access to a new U.S. Geological Survey handbook that, for the first time, comprehensively describes the various models used to study and predict sea-level rise and its potential impacts on coasts.
Designed for the benefit of land managers, coastal planners, and policy makers in the United States and around the world, the handbook explains many of the contributing factors that account for sea-level change. It also highlights the different data, techniques, and models used by scientists and engineers to document historical trends of sea level and to forecast future rates and the impact to coastal systems and communities.
“Our goal was to introduce the non-expert to the broad spectrum of models and applications that have been used to predict environmental change for sea-level rise assessments,” said Thomas Doyle, Deputy Director of the National Wetlands Research Center in Lafayette, Louisiana, and the lead author of the guide. “We provide a simple explanation of the complex science and simulation models from published sources to help inform land management and adaptation decisions for areas under risk of rising sea levels.”
The scope and content of the handbook was developed from feedback received at dozens of training sessions held with coastal managers and planners of federal, state, and private agencies across the northern Gulf of Mexico. The sessions aimed to determine what tools and resources were currently in use and to explain the broad spectrum of data and models used in sea-level rise assessments from multiple disciplines, including geology, hydrology and ecology. Criteria were established to distinguish various characteristics of each model, including the source, scale and quality of information input and geographic databases, as well as the ease or difficulty of storing, displaying, or interpreting the model output.
“A handbook of this nature was identified as a high priority need by resource managers,” said Virginia Burkett, USGS Chief Scientist for Climate and Land Use Change. “[The handbook] will serve as a practical guide to the tools and predictive models that they can use to assess sea-level change impacts on coastal landscapes.”
The work was supported by the Department of the Interior Southeast Climate Science Center, which is managed by the U.S. Geological Survey. The center is one of eight that provide scientific information to help natural resource managers respond effectively to climate change.
Trends in pesticide concentrations in 38 major rivers in the U.S. during 1992-2010 reflect large-scale trends in pesticide use and regulatory changes, according to a new study by the U.S. Geological Survey.
The study, the first to rigorously compare riverine pesticide concentrations with trends in pesticide use at the national scale, examined 11 pesticides that have sufficient historical data for trend analyses and that are among the top 20 most frequently detected in rivers and streams in the United States. Most of the 11 long-used chemicals had primarily downward trends in concentrations in most regions over the study period. Focusing on this group of 11 pesticides with the most extensive concentration data affords a unique opportunity to study the relations between river concentrations and use or other factors that may influence trends.
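As a sketch of what a concentration-trend calculation can look like, the snippet below applies a Theil-Sen slope estimator (a robust trend measure often paired with Mann-Kendall tests in water-quality work) to an invented annual series. The data and the single-site framing are assumptions for illustration only, not the study's actual methods or results:

```python
from statistics import median

# Hypothetical annual concentrations (ug/L) for one pesticide at one site;
# the values are invented to show a steady downward trend like those described above.
years = list(range(1992, 2011))
conc = [0.50 - 0.02 * (y - 1992) for y in years]

# Theil-Sen estimator: the median of all pairwise slopes between observations.
# It is robust to outliers, which matters for noisy environmental monitoring data.
slopes = [(conc[j] - conc[i]) / (years[j] - years[i])
          for i in range(len(years)) for j in range(i + 1, len(years))]
trend = median(slopes)
print(f"{trend:.3f} ug/L per year")  # about -0.020 for this synthetic series
```

Real analyses of riverine pesticide data also account for seasonality, flow, and censored (below-detection) values, which this toy example omits.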
Trends in pesticide concentrations followed agricultural usage patterns and regulatory restrictions on use for pesticides used primarily on agricultural crops — cyanazine, alachlor, atrazine (and its degradate, deethylatrazine), metolachlor, and carbofuran.
"In major river basins, the overall influence of agricultural pesticide use is so strong," said Karen Ryberg, USGS statistician and lead of the study, "that any changes in other causes of trends in pesticide concentrations in the water — changes that might be traced to enhanced agricultural management practices — are difficult to discern, especially without improved data on both the use of specific pesticides and the timing, location, and extent of management practices."
Alachlor concentration trends in major rivers, for example, declined nationwide from 1992 to 2010 as the use of alachlor, a herbicide most commonly applied to corn, dropped from about 20,000 to 2,500 metric tons. The introduction of a new herbicide, acetochlor, and the increased use of glyphosate-resistant corn and soybeans contributed to the nationwide decline in alachlor use.
For pesticides with substantial use in both agricultural and urban areas — simazine, chlorpyrifos, malathion, diazinon, and carbaryl — pesticide concentration trends in major rivers reflect both agricultural and nonagricultural usage patterns.
Urban contributions of pesticides have marked effects on concentration trends of some pesticides in major rivers, even though urban land covers a much smaller area than agricultural land in most river basins.
More than 400 pesticides are used in agriculture each year. Regulatory changes, market forces, and the introduction of new pesticides continually alter pesticide use over time. Because of resource constraints, the USGS National Water-Quality Assessment Program monitors less than half of the pesticides currently used in agriculture. However, USGS is working to fill these gaps by monitoring new pesticides that come into use, such as the neonicotinoid and pyrethroid insecticides.
The article, "Trends in Pesticide Concentrations and Use for Major Rivers of the United States" by Karen Ryberg and Robert Gilliom, has recently been published in the journal Science of the Total Environment.
National maps and trend graphs that show the distribution of the agricultural use of 459 pesticides for each year during 1992-2012 in the conterminous U.S. are available online.
Drought- and bark-beetle-induced mortality in high-elevation whitebark pine (Pinus albicaulis) forests, northern Warner Mountains (Drake Peak), Oregon. (High resolution image)
SEQUOIA AND KINGS CANYON, Calif. — A new paper published today in Science magazine has synthesized existing studies on the health of temperate forests across the globe and found a sobering diagnosis. Longer, more severe, and hotter droughts and a myriad of other threats, including diseases and more extensive and severe wildfires, are threatening some of these forests with transformation. Without informed management, some forests could convert to shrublands or grasslands within the coming decades.
“While we have been trying to manage for resilience of 20th century conditions, we realize now that we must prepare for transformations and attempt to ease these conversions,” said Constance Millar, lead author and forest ecologist with the USDA Forest Service’s Pacific Southwest Research Station.
Many forests are remarkably resilient, regrowing after years of logging. Yet, the researchers note from their review of the enormous body of work on the subject, climate change and rising global temperatures are giving rise to “hotter” droughts: droughts that exhibit a level of severity beyond that witnessed in the past century. During a hotter drought, high air temperatures overheat leaves and increase the stress on trees by drawing moisture from their tissues at faster rates than normal. Snow that would normally act as emergency water storage for trees during the dry season instead falls as rain.
Combined, these factors may cause abnormally high levels of forest mortality during hotter droughts.
“Some temperate forests already appear to be showing chronic effects of warming temperatures, such as slow increases in tree deaths,” said Nathan Stephenson, coauthor and ecologist with the U.S. Geological Survey. “But the emergence of megadisturbances, forest diebacks beyond the range of what we’ve normally seen over the last century, could be a game-changer for how we plan for the future.”
Chronic stress from drought and warming temperatures also exposes temperate forests to insect and disease outbreaks. And as temperatures rise in many regions, fires grow in frequency and severity, causing losses of private property, natural resources and lives.
Losing temperate forests to worsening droughts, megafires, and insect and disease outbreaks could lead to widespread losses of forest ecosystem services such as national park recreational areas, the researchers caution. Forests also play important roles in storing atmospheric carbon dioxide and protecting watersheds. The scientists encourage future studies to identify the forests most vulnerable to the effects of megadisturbances. In some cases, forest managers may be able to preserve ecosystem services such as carbon storage as temperate forests transition to new ecological states.
The paper “Temperate Forest Health in an Era of Emerging Megadisturbance” was released in the journal Science.
The Forest Service Pacific Southwest Research Station, headquartered in Albany, Calif., develops and communicates science needed to sustain forest ecosystems and other benefits to society. It has research facilities in California, Hawaii and the U.S.-affiliated Pacific Islands.
Spider monkeys (Ateles geoffroyi) are omnivores, often feeding on fruits and insects. (High resolution image)
In a paper released today in Science, a new model presents a common mathematical structure that underlies the full range of feeding strategies of plants and animals: from familiar parasites, predators, and scavengers to more obscure parasitic castrators and decomposers. Now ecologists can view all food-web interactions through the same lens using a common language to understand the natural world.
“Physicists use ‘string theory’ to decipher the universe, economists use complex regression methods to model the global economy, but what about the animals and plants that supply our food and that clean and produce the air we breathe?” said co-author Andrew Dobson, a professor in Princeton University’s Department of Ecology and Evolutionary Biology.
The model captures the structure of all the consumer-resource links (plants capturing sunlight, predators eating prey, and parasites eating hosts) that connect species in food webs. “It rolls a century’s worth of food-web mathematics into a single model,” said U.S. Geological Survey ecologist and lead author Kevin Lafferty.
Although ecologists have previously assumed that different food-web links have different structures (for example, that lions eating zebras operate in different ways than viruses causing disease), this new research finds that they share a common structure, but with distinct characteristics. Insights from past ecological research, as well as new ecological models, can now be viewed through a common framework akin to physics or chemistry. Co-author Armand Kuris of the University of California, Santa Barbara, considers this “the first development of a unifying theory for ecology. With this approach we can now see the entire elephant, not just some of its parts.”
“Ecologists have long used mathematical equations to study how predators and diseases affect plant, animal and human populations,” said co-author Cheryl Briggs of UC Santa Barbara, “But these approaches have been idiosyncratic, limited in scope and full of hidden assumptions.”
The model emerged from a National Science Foundation sponsored working group organized by Lafferty and Dobson at the National Center for Ecological Analysis and Synthesis, a think tank at UC Santa Barbara where ecologists tackle big problems about the environment. The group first set out to reveal the hidden role of parasites in food webs. Early discussions took the group down the same road traveled by others: trying to find different functions to fit different types of parasites and predators.
After several years, the group realized that a consistent mathematical backbone underlay their efforts. Out of a jumble of seemingly unrelated and complicated mathematical expressions, they found a simple solution that generalized across a comprehensive range of ecological interactions and revealed previously unobserved similarities and hidden assumptions in classic ecological models. The solution provides a general mathematical framework for food webs. Ecologists can use this general model to develop a deeper understanding of how the world functions ecologically, with profound implications for infectious diseases, fisheries, conservation and how humans manage natural ecosystems.
The team anticipates that their work will lead to a new generation of food-web models that examine ecological structure more acutely and reveal how that structure is responding to global change.
The paper “A General Consumer-Resource Population Model,” published today in Science, includes authors from UC Santa Barbara, Stanford University, the University of Bristol, Princeton University and the Santa Fe Institute.
Heidi Koontz (Phone: 303-202-4763)
Although the Grand Canyon segment of the Colorado River features one of the most remote ecosystems in the United States, it is not immune to exposure to toxic chemicals such as mercury, according to newly published research in Environmental Toxicology and Chemistry.
The study, led by the U.S. Geological Survey, found that concentrations of mercury and selenium in Colorado River food webs of Grand Canyon National Park regularly exceeded risk thresholds for fish and wildlife. These risk thresholds indicate the concentrations of toxins in food that could be harmful if eaten by fish, wildlife and humans. These findings add to a growing body of research demonstrating that remote ecosystems are vulnerable to long-range transport and bioaccumulation of contaminants.
“Managing exposure risks in the Grand Canyon will be a challenge, because sources and transport mechanisms of mercury and selenium extend far beyond Grand Canyon boundaries,” said Dr. David Walters, USGS research ecologist and lead author of the study.
David Uberuaga, superintendent of Grand Canyon National Park, added, “studies like this continue to educate the public and highlight the threats that face the park and its resources."
The study examined food webs at six sites along nearly 250 miles of the Colorado River downstream from Glen Canyon Dam within Glen Canyon National Recreation Area and Grand Canyon National Park in the summer of 2008. The researchers found that mercury and selenium concentrations in minnows and invertebrates exceeded dietary fish and wildlife toxicity thresholds.
Although the number of samples was relatively low, mercury levels in rainbow trout, the most common species harvested by anglers in the study area, were below the EPA threshold that would trigger advisories for human consumption.
“The good news is that concentrations of mercury in rainbow trout were very low in the popular Glen Canyon sport fishery, and all of the large rainbow trout analyzed from the Grand Canyon were also well below the risk thresholds for humans,” said Dr. Ted Kennedy, USGS researcher and co-author of the study.
“We also found some surprising patterns of mercury in rainbow trout in the Grand Canyon. Biomagnification usually leads to large fish having higher concentrations of mercury than small fish. But we found the opposite pattern, where small, 3-inch rainbow trout in the Grand Canyon had higher concentrations than the larger rainbow trout that anglers target. This inverted pattern likely has something to do with the novel food web structure that has developed in Grand Canyon.”
Airborne transport and deposition, with much of it coming from outside the country, is the mechanism most commonly identified for contaminant introduction to remote ecosystems, and it is a potential pathway for mercury entering the Grand Canyon food web. Long-range downstream transport from upstream sources can also deliver contaminants to river food webs. That is the case for selenium in this study: irrigation of selenium-rich soils in the upper Colorado River basin contributes much of the selenium that is present in the Colorado River in Grand Canyon.
Exposure to high levels of selenium and mercury has been linked to lower reproductive success, growth, and survival of fish and wildlife. No human consumption advisories are currently in place for fish harvested from the study area. However, to assess potential risks to humans that may consume fish from Grand Canyon or Glen Canyon National Recreation Area, additional studies are planned.
Research partners in this study include the Cary Institute of Ecosystem Studies, Montana State University, and Idaho State University.
Map of study area showing sample locations relative to Glen Canyon Dam on the Colorado River, Grand Canyon (AZ, USA).
“The USGS has expanded an excellent working relationship with the U.S. Forest Service to include adding their recreational trails to the data being integrated into The National Map and on our US Topo maps,” said Kari Craun, director of the USGS National Geospatial Technical Operations Center. “The value of adding trails in areas like Bridger-Teton is high because of the number of people using the trails. We are very excited about taking this first step toward incorporating U.S. Forest Service trails information on US Topo maps nationwide.”
The U.S. Forest Service has provided boundary and road data for the US Topo map series for the past five years, and is now working on a national dataset of recreational trails.
Also, a number of the new Wyoming maps contain segments of the Continental Divide National Scenic Trail (CDNST) and other selected public trails. Further substantial upgrades to the new quadrangles include map symbol redesign, enhanced railroad information and new road source data. Some of the data for the trails is provided to the USGS through a nationwide “crowdsourcing” project managed by the International Mountain Biking Association.
The 3,100-mile long CDNST runs from Canada to Mexico through Montana, Idaho, Wyoming, Colorado, and New Mexico. Crossing the spine of the North American continent numerous times, it traverses some of America's most spectacular and isolated scenery, offering views unlike any other trail in the world.
This NST joins the Ice Age National Scenic Trail, the Pacific Northwest National Scenic Trail, the North Country National Scenic Trail, the Pacific Crest National Scenic Trail, the Arizona National Scenic Trail, the Appalachian National Scenic Trail, the Natchez Trace National Scenic Trail and the New England National Scenic Trail as being featured on the new US Topo quads. The USGS hopes to eventually include all National Scenic Trails in The National Map products.
Additionally, the new Wyoming US Topo maps will continue the inclusion and improvement of Public Land Survey System data. Wyoming was one of the first states to display this topographic layer several years ago. PLSS is a way of subdividing and describing land in the US. All lands in the public domain are subject to subdivision by this rectangular system of surveys, which is regulated by the U.S. Department of the Interior.
These new maps replace the first edition US Topo maps for Wyoming and are available for free download from The National Map, the USGS Map Locator & Downloader website, or several other USGS applications.
To compare change over time, scans of legacy USGS topo maps, some dating back to the late 1800s, can be downloaded from the USGS Historical Topographic Map Collection.
For more information on US Topo maps: http://nationalmap.gov/ustopo/
Updated 2015 version of the Green River Lakes quadrangle with orthoimage turned on. (1:24,000 scale) (Larger image)
Scan of the 1919 USGS quadrangle of the Fremont Peaks area from the USGS Historic Topographic Map Collection. (1:125,000 scale) (Larger image)
Updated 2015 version of the Green River Lakes quadrangle with orthoimage turned off to better see the various trail networks. (1:24,000 scale) (Larger image)
The National Trails System was established by Act of Congress in 1968. The Act grants the Secretary of the Interior and the Secretary of Agriculture authority over the National Trails System. The Act defines four types of trails. Two of these types, National Historic Trails and National Scenic Trails, can only be designated by Act of Congress. National Scenic Trails are extended trails so located as to provide for maximum outdoor recreation potential and for the conservation and enjoyment of nationally significant scenic, historic, natural, and cultural qualities of the areas through which such trails may pass.
There are 11 National Scenic Trails:
- Appalachian National Scenic Trail
- Pacific Crest National Scenic Trail
- Continental Divide National Scenic Trail
- North Country National Scenic Trail
- Ice Age National Scenic Trail
- Potomac Heritage National Scenic Trail
- Natchez Trace National Scenic Trail
- Florida National Scenic Trail
- Arizona National Scenic Trail
- New England National Scenic Trail
- Pacific Northwest National Scenic Trail
USGS discovered insecticides known as neonicotinoids in a little more than half of both urban and agricultural streams sampled across the United States and Puerto Rico, according to a study by the agency published today in Environmental Chemistry.
This study, conducted from 2011 to 2014, represents the first national-scale investigation of the environmental occurrence of neonicotinoid insecticides in agricultural and urban settings. The research spanned 24 states and Puerto Rico and was completed as part of ongoing USGS investigations of pesticide and other contaminant levels in streams.
“In the study, neonicotinoids occurred throughout the year in urban streams while pulses of neonicotinoids were typical in agricultural streams during crop planting season,” said USGS research chemist Michelle Hladik, the report’s lead author.
“The occurrence of low levels in streams throughout the year supports the need for future research on the potential impacts of neonicotinoids on aquatic life and terrestrial animals that rely on aquatic life,” said USGS scientist Kathryn Kuivila, the research team leader. "These results will serve as an important baseline for that future work."
This foundational study is the first step toward setting priorities for environmental exposure experiments and assessing the potential for adverse impacts on terrestrial and aquatic organisms. Scientists and others have raised concerns about potential harmful effects of neonicotinoids on non-target insects, especially pollinating honey bees and native bees.
In May, the White House released the Strategy to Promote the Health of Honey Bees and Other Pollinators, which includes a Pollinator Research Action Plan.
"This research will support the overall goals of the Strategy, by helping to understand whether these water-borne pesticides, particularly at the low levels shown in this study, pose a risk for pollinators,” said Mike Focazio, program coordinator for the USGS Toxic Substances Hydrology Program.
At least one of the six neonicotinoids tested by USGS researchers was found in more than half of the sampled streams. No concentrations exceeded the United States Environmental Protection Agency’s aquatic life criteria, and all detected neonicotinoids are classified as not likely to be carcinogenic to humans.
Detections of the six neonicotinoids varied: imidacloprid was found in 37 percent of the samples in the national study, clothianidin in 24 percent, thiamethoxam in 21 percent, dinotefuran in 13 percent, and acetamiprid in 3 percent, while thiacloprid was not detected.
Use of neonicotinoids to control pest insects has been increasing over the past decade, especially on corn and soybeans. Much of this increase is due to a shift from leaf applications to using the insecticides prophylactically on seeds.
The paper, “First National-Scale Reconnaissance of Neonicotinoid Insecticides in Streams across the USA,” was published in Environmental Chemistry. To learn more about the study, please see our science feature. To learn more about USGS environmental health science, please visit the USGS Environmental Health website and sign up for our GeoHealth Newsletter.
The results are in. And the public clearly wins.
In April 2015, the U.S. Geological Survey, the U.S. Environmental Protection Agency, and Blue Legacy International (a nonprofit organization) challenged solvers to use open government data sources to create compelling visualizations that would inform individuals and communities about nutrient pollution (high levels of nitrogen and phosphorus that cause excessive growth of algae).
Nutrient pollution is one of America’s most widespread, costly, and challenging environmental problems. It degrades the nation’s waterways, municipal and industrial water resources, wildlife, recreation, and fishing. Nutrient pollution is far-reaching, affecting more than 100,000 miles of rivers and streams, close to 2.5 million acres of lakes, reservoirs, and ponds, and more than 800 square miles of bays and estuaries in the United States.
The ultimate goal for the visualization challenge is to inspire citizens to take action at the local watershed level to reduce nutrient pollution and thus help to prevent algal blooms and hypoxia.
Here are the results of the 2015 Visualizing Nutrients Challenge.
A Resource Out of Place: The Story of Phosphorus, Lake Erie, and Toxic Algal Blooms
This visualization, created by Matthew Seibert, Benjamin Wellington, and Eric Roy, of Landscape Metrics, uses USGS monitoring data to inform individuals and communities about phosphorus runoff to Lake Erie. The authors sought to “inspire multiple stakeholders to strive toward both better resource management and improved environmental quality.”
Demonstrating creative use of open water data and effective storytelling, the following visualization submissions warranted special recognition.
- Short film illustrating nutrient levels on the Los Angeles River using a digital elevation model. (Catherine Griffiths, Isohale)
- How does increasing nutrients affect you? Animated illustration and interactive nitrogen concentration tool. (Dr. Zofia Taranu)
- Interactive chart illustrating water quality results on the Loxahatchee River.
- The Silent Predator of the Deep Blue: Hypoxia. Infographic explaining hypoxia. (Kayla Brady, Sathya Ram, Michael Ruiz, Matthew Peters, and Thaumas Mathew, Computer Aid, Inc.)
- VizNut48: Nutrient Pollution in the US Surface Waters and Management Actions. ArcGIS map of US surface water plotting nutrient pollution results.
- Visualizing Water Pollution Data Using Beck-Style Flow Path Maps. Illustration of water systems and site results modeled after public transit maps. (Prof. Edward Aboufadel, Department of Mathematics, Grand Valley State University; Daniel P. Huffman)
* These Challenge submissions can be viewed online.
First place receives $10,000. Both the winning and runner-up visualizations will be highlighted in a number of important forums, including a showcase at the Nutrient Sensor Summit in Washington, DC, on August 12, 2015.
The Visualizing Nutrients Challenge is part of the broader work of the Challenging Nutrients Coalition. The coalition was formed in 2013 when the White House Office of Science and Technology Policy convened a group of federal agencies, universities, and non-profit organizations to seek innovative ways to address nutrient pollution.
This Challenge marks the starting point for further discussion and application of data visualization tools to help tell the stories of our water. Blue Legacy International, a water advocacy organization championed by global explorer Alexandra Cousteau, will promote the results of the Challenge across a variety of digital platforms, where anyone can join the discussion to advance three critical areas of data visualization for public awareness:
- Reliable and accurate use of water data,
- Effective and clear communication of water issues supported by data, and
- Transformation of complex water issues into relatable, tangible stories that inspire and activate the public.
Visualizing Nutrients builds on the activities of the Open Water Data Initiative, which seeks to further integrate existing water datasets and make them more accessible for innovation and decision making. The Open Water Data Initiative works in conjunction with the President's Climate Data Initiative.
For additional information, visit the prize competition website. The results of the 2015 Visualizing Nutrients Challenge can be viewed online.