What are the core requirements of wide area CBRNe training?

Written by Steven Pike, Argon Electronics

When you are required to conduct wide area emergency preparedness training – be it in a chemical, biological, radiological, nuclear and explosive (CBRNe) school, a dedicated military center or an industrial facility – the ongoing challenge for any CBRNe instructor is to create a scenario that is realistic, safe, reliable and cost-effective.

Trainees need to be equipped with the practical knowledge and skills to respond with confidence to an enormous variety of potential live incidents. And each threat brings with it a unique set of practical, physical and psychological tasks that need to be ‘experienced’ in order to be understood.

So what is the recommended approach to help instructors implement a realistic but safe CBRNe training environment?

Overcoming regulatory obstacles

While the spreading of chemical simulants can still occasionally be an option, strict environmental regulations generally make it unfeasible – and the use of any form of radiological source is almost always impractical for all but the most highly specialized training facilities.

Simulant training also brings with it the problem of being very location-dependent, which restricts the ability to create scenarios in public settings or confined spaces. And there is the added difficulty that simulant training cannot be readily integrated with other conventional live training methods.

Wide-area instrumented training systems

When the highest degree of realism is required, a powerful modular exercise control system such as PlumeSIM enables instructors to take their CBRNe training exercises to an entirely new level. And it especially comes into its own in the context of counter terrorism scenarios, nuclear training drills and HazMat emergency exercises.

So what benefits does the PlumeSIM training system offer?

Portability – PlumeSIM is highly portable, making it quick to set up and use in any environment. The inclusion of a planning mode also means that instructors can easily prepare exercises on a laptop or PC without the need for any form of system hardware.

Realism – Students are equipped with simulators and GPS-enabled player units, enabling them to take part in large area exercises that can include sequential multi-threat releases or that integrate with third-party live training systems.

Instructor control – The instructor retains complete control of the exercise including the ability to decide the type, quantity, location and nature of the source.

Environment – Specific environmental conditions can also be easily defined by the user, including temperature and changes in wind direction.

Repeatability – PlumeSIM’s exercise parameters can be saved so that an identical scenario can be repeated as many times as required.

Real-time action – The trainees’ movements, progress and instrument usage can be monitored in real time from a central control station.

After action review – The recording of student activity in real time provides useful material for after action review (AAR). This can be used to encourage discussions about the effectiveness of an exercise and to facilitate further improvements.

Data capture – All recorded exercise data can also be exported and emailed to external personnel for future analysis.

Pre-exercise capability – The table-top planning mode uses standard gamepad controllers, enabling trainees to undertake pre-exercise practice within the classroom environment. The exercise can also be recorded and analysed prior to heading for the live field training area.

Versatility – If environmental conditions preclude the ability to obtain or maintain continuous long-range radio communication then the scenario can be pre-loaded on the player unit for timed activation.

Compatibility – The PlumeSIM system is compatible with a wide variety of simulator equipment including the M4 JCAD-SIM, CAM-SIM, AP2C-SIM, AP4C-SIM, RDS200-SIM, EPD-Mk2-SIM, AN/PDR-77/VDR-2 and RDS100-SIM.

Room to grow – The modular system gives instructors the flexibility to expand their range of training equipment as and when their budgets allow.

Achieving the highest level of realism in CBRNe training is paramount – and assuring personnel safety will always be key.

A flexible, modular simulator-based training solution such as the PlumeSIM system can provide trainees with the opportunity to practice and perfect their response to a wide variety of highly-realistic simulated threats in a completely safe environment.


About the Author

Steven Pike is the Founder and Managing Director of Argon Electronics, a leader in the development and manufacture of Chemical, Biological, Radiological and Nuclear (CBRN) and hazardous material (HazMat) detector simulators. He is interested in liaising with CBRN professionals and detector manufacturers to develop training simulators as well as CBRN trainers and exercise planners to enhance their capability and improve the quality of CBRN and Hazmat training.

When Is It Too Late to Sue for Environmental Contamination? The Alberta Court of Appeal Rules

Written by Laura M. Gill, Stephanie Clark, and Justin Duguay, Bennett Jones LLP

On February 6, 2019, the Alberta Court of Appeal (ABCA) released its first ever decision on section 218 of the Environmental Protection and Enhancement Act (EPEA), which may extend limitation periods applicable to environmental contamination claims.

By a unanimous decision in Brookfield Residential (Alberta) LP (Carma Developers LP) v Imperial Oil Limited, 2019 ABCA 35 [Brookfield], the ABCA upheld a lower court decision where the judge refused to exercise his discretion under section 218 of the EPEA to extend the limitation period for an environmental contamination claim. Extending the limitation period would have likely been prejudicial to the defendant’s ability to maintain a defence to the claim, as the alleged cause of the environmental damage occurred over 60 years ago. We previously discussed the 2017 Court of Queen’s Bench decision in an earlier post, When is an Environmental Contamination Claim Too Old to Extend the Limitation Period?

Background

Brookfield Residential (Alberta) LP (Brookfield) brought a negligence claim in the Alberta Court of Queen’s Bench (ABQB) against Imperial Oil Limited (Imperial) for environmental contamination from an oil well. Imperial drilled and operated the well between 1949 and 1950, and disposed of it in either 1950 or 1954. Multiple owners operated the well between 1950 and 1957 and then used it for salt water disposal between 1958 and 1961, at which point the well was decommissioned and abandoned. After several additional transfers of ownership, the site was issued a reclamation certificate in 1968. Contamination requiring remediation was not discovered until 2010, when Brookfield was preparing the site for residential development.

Brookfield brought an application under section 218 of the EPEA to extend the limitation period, and Imperial cross-applied with a summary dismissal application, asserting that the limitation period had expired. Since it was clear that the ten-year ultimate limitation period under the Limitations Act had expired, Brookfield’s negligence claim was entirely dependent on an extension of the limitation period under section 218. The ABQB refused to extend the limitation period and summarily dismissed the action against Imperial. Brookfield appealed.

The appeal was dismissed. In its reasons, the ABCA provided guidance on three important aspects of section 218 applications: (i) procedure and timing; (ii) the impact of the passage of time on prejudice to the defendant; and (iii) policy considerations relevant to the fourth factor in section 218(3).

1. Applications Under Section 218 of the EPEA Should Be Decided Prior to Trial

The ABCA in Brookfield ruled that applications under section 218 of the EPEA should be decided prior to trial, overruling the two-part test in Lakeview Village Professional Centre Corporation v Suncor Energy Inc, 2016 ABQB 288 [Lakeview]. In Lakeview, the ABQB set out a two-part approach to section 218 applications where the court may make a preliminary determination on limitations and allow the action to proceed subject to a final determination on the merits of the limitations issue at trial. Lakeview became the leading case on the procedure for section 218 applications.

In overturning the Lakeview test, the ABCA found two problems with the approach of deferring the decision on extending limitation periods until trial. First, the Lakeview approach “is inconsistent with the wording of section 218, which provides that the limitation period can be extended ‘on application'”. Second, the approach defeats the whole purpose of limitation periods because it forces a defendant to go through the expense and inconvenience of a full trial on the merits for a determination on limitations, notwithstanding that a limitation period is intended to eliminate the distractions, expense, and risks of litigation after the prescribed time has passed.

2. The Passage of Time Increases the Likelihood of Prejudice to the Defendant

The ABCA affirmed the approach of balancing the four factors in section 218(3), which in this case revolved primarily around the third factor (prejudice to the defendant). The ABCA found that it was reasonable for the ABQB to infer prejudice from the passage of time, noting that this is the presumption behind statutes of limitation. The allegations in Brookfield’s claim occurred over 60 years ago, and as such, witnesses and documentary evidence were difficult to identify and were no longer available. The passage of time also made it difficult to establish the proper standard of care. The ABCA agreed that attempting to determine 1949 industry standards and the standard of care at that time would prejudice Imperial.

3. The Competing Policy Objectives of the Limitations Act and the EPEA

The ABCA also provided guidance on the fourth factor listed in section 218(3), which grants judicial discretion to consider “any other criteria the court considers to be relevant”. The ABCA found that policy considerations behind limitations statutes were relevant criteria that should be weighed. In particular, the ABCA noted the policy objectives of statutes of limitations that actions must be commenced within set periods so that defendants are protected from ancient obligations, disputes are resolved while evidence is still available, and claims are adjudicated based on the standards of conduct and liability in place at the time. However, on the other hand, the ABCA highlighted that the EPEA has a “polluter pays” objective where a polluter should not escape responsibility by the mere passage of time.

Implications

The ABCA’s decision in Brookfield changes the procedure for extending limitation periods in environmental contamination claims. Rather than waiting until trial, parties must bring section 218 applications early on. As a result, plaintiffs in contaminated sites claims should also carefully assess the impacts on defendants of the passage of time in making section 218 applications. Brookfield reinforces that a court will likely presume greater prejudice from a longer passage of time, especially if witnesses and evidence may be difficult to identify and the standard of care may be difficult to assess. Going forward, Brookfield suggests that the Court will take a practical approach to assessing prejudice against a defendant when deciding whether to extend limitation periods in contaminated site claims where the ultimate limitation period has passed.


This article has been republished with the permission of the authors. It was first published on the Bennett Jones website.

About the Authors

Laura Gill is called to the bar in Alberta and British Columbia and has a commercial litigation practice specializing in energy and natural resources, First Nations issues, and environmental matters. Laura advises clients on disputes in a wide range of corporate matters, including complex breach of contract claims and joint ventures.

Laura’s experience in the energy industry includes litigating disputes involving leases, right-of-way agreements, ownership stakes, royalties, gas supply contracts, farmout agreements, and CAPL operating agreements. Laura also acts on appeals and judicial review proceedings following decisions of regulatory bodies, in particular with respect to regulatory approvals for energy-related projects in Alberta and British Columbia.

Stephanie Clark has a general commercial litigation practice. Stephanie has assisted with matters before all levels of the Alberta court system. During law school, Stephanie held a student clerkship with the Honourable Mr. Justice Nicholas Kasirer at the Court of Appeal of Quebec, competed in the 2015 Jessup International Law Moot, and was awarded the Borden Ladner Gervais Professional Excellence Award. Stephanie articled with the firm’s Calgary office prior to becoming an associate.

Justin Duguay is an articling student at Bennett Jones.

With more oil to be shipped by rail, train derailments show enduring safety gaps

by Mark Winfield and Bruce Campbell, Faculty of Environmental Studies, York University, Canada

The recent runaway CP Rail train in the Rocky Mountains near Field, B.C., highlighted ongoing gaps in Canada’s railway safety regime, more than five years after the Lac-Mégantic rail disaster that killed 47 residents of the small Québec town.

The British Columbia crash resulted in the deaths of three railway workers and the derailment of 99 grain cars and two locomotives.

In the B.C. accident, the train involved had been parked for two hours on a steep slope without the application of hand brakes in addition to air brakes.

The practice of relying on air brakes to hold trains parked on slopes was permitted by both the company and by Transport Canada rules. Revised operating rules, adopted after the Lac-Mégantic disaster, had not required the application of hand brakes under these circumstances.

The latest accident was one of a rash of high-profile train derailments in Canada since the beginning of 2019. While none compares in magnitude with Lac-Mégantic, they evoke disturbing parallels to that tragedy. Although investigations are ongoing, what we do know raises questions about whether any lessons have in fact been learned from the 2013 disaster.

Now must apply hand brakes

Within days of the B.C. runaway, both CP Rail and Transport Canada mandated the application of hand brakes in addition to air brakes for trains parked on slopes. This after-the-fact measure parallels the action Transport Canada took days after Lac-Mégantic, prohibiting single-person crews, after having granted permission to Montréal Maine and Atlantic Railway to operate its massive oil trains through Eastern Québec with a lone operator.

Furthermore, like the Lac-Mégantic tragedy, existing mechanical problems with the locomotives involved reportedly played a role in the CP Rail derailment, raising questions about the adequacy of oversight with regard to equipment maintenance practices.

Like Lac-Mégantic, worker fatigue may have also played a role in the crash. Despite efforts within Transport Canada to force railways to better manage crew fatigue, railway companies have long resisted. Instead, they have taken a page out of the tobacco industry playbook, dismissing inconvenient scientific evidence as “emotional and deceptive rhetoric.”

The situation has prompted the Transportation Safety Board to put fatigue management on its watchlist of risky practices, stating that Transport Canada has been aware of the problem for many years but is continuing to drag its feet.

Oil-by-rail traffic explodes

The implications of the B.C. accident take on additional significance in light of the dramatic growth seen in oil-by-rail traffic in Canada over the past year. Export volumes reached a record 354,000 barrels per day in December 2018, with the vast majority of the oil going to refineries on the U.S. Gulf Coast and Midwest.

This development has not gone unnoticed by people living in communities across North America, who are concerned about the growing danger of another disastrous derailment.

The increase in traffic — now bolstered by the Alberta government’s plan to put another 120,000 barrels per day of crude oil on the rails by next year — is occurring at a time when the Transportation Safety Board reported a significant increase in “uncontrolled train movements” during 2014-17 compared to the average of the five years preceding the disaster.




This is despite the board’s Lac-Mégantic investigation report recommendation that Transport Canada implement additional measures to prevent runaway trains.

Two weeks after the B.C. crash, a CN train carrying crude oil derailed near St. Lazare, Man.; 37 tank cars left the tracks, punctured and partially spilled their contents. The cars were a retrofitted version of the TC-117 model tank car, developed after Lac-Mégantic, intended to prevent spills of dangerous goods. The train was travelling at 49 mph, just under the maximum allowable speed.

Budgets chopped

In the lead-up to the Lac-Mégantic disaster, the Harper government squeezed both Transport Canada’s rail safety and transportation of dangerous goods oversight budgets. These budgets did not increase significantly after the disaster.

Justin Trudeau’s government pledged additional resources for rail safety oversight. However, Transport Canada’s plans for the coming years show safety budgets falling back to Harper-era levels. It remains to be seen whether these plans will be reversed in the upcoming federal budget.

A Safety Management Systems-based approach remains the centrepiece of Canada’s railway safety system. That system has been fraught with problems since it was introduced 17 years ago.

It continues to allow rail companies to, in effect, self-regulate, compromising safety when it conflicts with bottom-line priorities. Government officials claim there has been a major increase in the number of Transport Canada rail safety inspectors conducting unannounced, on-site inspections. But the inspectors’ union questions these claims.

If an under-resourced regulator, with a long history of deference to the industry, is unable to fulfil its first-and-foremost obligation to ensure the health and safety of its citizens, the lessons of Lac-Mégantic have still not been learned. The B.C. accident highlights that the window for history to repeat itself remains wide open.


This article is republished with permission. It was first published on The Conversation website.

About the Authors

Mark Winfield is a Professor of Environmental Studies, York University, Canada

Bruce Campbell is an Adjunct Professor, Faculty of Environmental Studies, York University, Canada

Are New United States Regulations Coming for Accidental Releases into Air?

By Louis A. Ferreira, Willa B. Perlmutter, and Guy J. Thompson, Stoel Rives LLP

On February 4, 2019, a federal court ruled that the U.S. Chemical Safety and Hazard Investigation Board must issue regulations within one year that set forth reporting requirements for accidental releases of hazardous substances into the ambient air. This requirement has been part of the Board’s statutory mandate since its inception in 1990 pursuant to Section 112(r)(6)(C)(iii) of the Clean Air Act (“CAA”). Nevertheless, the Board has never issued any such regulations.

Four non-profit groups and one individual filed a one-count complaint against the Board, seeking declaratory relief and an injunction to compel the Board to promulgate reporting requirements as required by the CAA. Plaintiffs claimed that the Board had violated the Administrative Procedure Act by not issuing any regulations. Plaintiffs further asserted the lack of reporting requirements have impaired their respective abilities to collect information that would help prevent future releases and the harm caused from such releases.

The United States District Court for the District of Columbia agreed with the plaintiffs and ruled that the Board must issue regulations within one year. In reaching its decision, the Court rejected the Board’s defenses that the delay in promulgating regulations was reasonable given the Board’s limited resources, small staff size, and other required functions. “[I]f that is the case,” the Court said, “the solution to its resource constraints is not to ignore a congressional directive[,] [i]t is to return to Congress and ask for relief from the statutory requirement.” The case is Air Alliance Houston, et al. v. U.S. Chem. & Safety Hazard Investigation Bd., D.D.C., No. 17-cv-02608, February 4, 2019.

The Court’s decision appears to follow a similar one issued in August 2018 in which some of the same plaintiffs brought a complaint against the U.S. Environmental Protection Agency. In that case, the plaintiffs petitioned the U.S. Court of Appeals for the D.C. Circuit for review of the EPA’s decision to delay for 20 months the effective date of a rule designed to promote accident safety and enhance the emergency response requirements for chemical releases. The Court rejected all of EPA’s defenses justifying the delay in a strongly-worded opinion that held the agency strictly to the letter of the CAA. That case is Air Alliance Houston, et al. v. EPA, 906 F.3d 1049 (D.C. Cir. 2018).

The same directness is evident in this recent decision.

Ultimately, the practical effect of the ruling is not clear. There are already laws in place that require companies to report accidental releases to state and federal authorities. It is possible the Board will promulgate regulations that align with its current practice of deferring reporting requirements to other agencies. If the Board took that approach, there likely would not be a noticeable difference in reporting requirements from the current practice.

On the other hand, the two recent decisions discussed above suggest that a trend may be forming in which the courts are pushing back when the government steps off its clear statutory path.


This article has been republished with the permission of the authors. The original post of this article can be found on the Stoel Rives LLP website.

About the Authors

Lou Ferreira is a senior partner with more than 27 years of complex trial experience.  His practice focuses on commercial litigation, insurance coverage and environmental, safety & health issues.  A seasoned litigator, Lou has significant experience in high-stakes litigation including successfully defending a class action filed against a utility by residents of a town in Washington asserting that the utility was liable for flooding as a result of the operations of its upstream dams.  Lou  successfully defended a port in Washington from a $20 million lawsuit brought by developers alleging breach of contract to develop a large mixed-use waterfront project on the Columbia River. 

Willa Perlmutter has more than 30 years of experience as a litigator, focusing for the last 20 on defending mine operators across all sectors of the industry in administrative enforcement proceedings brought by the Mine Safety and Health Administration (MSHA) for alleged violations of the Mine Act.  In addition, she regularly counsels clients on a broad range of issues that affect their mining operations, from personnel policies and actions to compliance with a broad range of federal statutes. Willa regularly defends companies and individuals facing investigations and formal legal proceedings for alleged safety and health violations under both the Federal Mine Safety and Health Act of 1977 and the Occupational Safety and Health Act of 1970, whether those arise out of a catastrophic event, such as an accident, or in the course of a regular inspection by MSHA or Occupational Safety and Health Administration (OSHA). She has successfully defended a number of mining companies in whistleblower cases brought under the Mine Act.

Guy Thompson is a litigator and advisor on a wide-range of insurance matters. His practice focuses on insurance coverage litigation, including natural resources/environmental insurance coverage, and a wide variety of risk management issues. Guy helps policyholders obtain the recovery they deserve from their insurers and has helped recover millions of dollars from insurance companies for his clients. Guy is skilled at getting insurance carriers to cooperate in paying claims and often secures settlements with insurers without the need for litigation. Recently, he helped recover over $1.65 million from multiple insurance carriers for a Portland company that was required to perform environmental cleanup by the Oregon Department of Environmental Quality.

How can a multi-gas detection simulator enhance emergency response?

Written by Steven Pike, Argon Electronics

The growth in global industry and manufacturing, together with the ever-present risk of terrorist threat, means emergency personnel are increasingly being required to respond to incidents where there is risk of exposure to explosive atmospheres, low or enriched oxygen, or the presence of lethal toxic vapours.

For response crews arriving on scene there are two essential questions to consider. Is the air safe enough to breathe? And are there any specific toxic gases present?

Gas detection is fundamental to emergency response – and multi-gas detectors are the ideal tools for serving the majority of first responders’ gas detection needs.

Ensuring that crews have access to the right air-monitoring equipment, and that they’re trained in how to use it, is essential for enabling them to make confident decisions in complex scenarios.

In this blog post we provide an overview of the most common types of air-monitoring equipment. And we explore how gas detection simulators can aid in the effectiveness of first response training. 

Portable multi-gas detectors come in a variety of styles and configurations, some with the ability to detect up to six gases at a time. So let’s first consider the four most common types:

Catalytic combustion sensors – in which a heated wire is used to detect a wide variety of flammable gases from natural gas leaks to gasoline spills. In catalytic combustion, power is applied to a special wire coil, in much the same way as a traditional light bulb. Any combustible gas that is exposed to the sensor will react on the wire surface and produce a display reading.

Electrochemical toxic gas sensors – which are used to detect the presence of toxic hazards. An electrochemical sensor is similar in design to a small battery, except that the chemical component required to produce the electric current is not present in the sensor cell. As the target gas diffuses through the sensor’s membrane, it reacts with chemicals on the sensing electrode to produce an electrical current.

Infrared detectors – commonly used to detect gases that are less reactive and therefore cannot be detected using typical electrochemical cells (such as CO2 or hydrocarbons). Instead of relying on a chemical reaction, infrared sensors determine the amount of gas present by measuring how much light the specific gas absorbs.

Photoionization detectors (PID) – which are used to detect volatile organic compounds (VOCs), such as benzene, that can be present during industrial spills. PIDs rely on the specific chemical properties of the VOCs, but instead of measuring light absorption they use a light source in the UV spectrum to ionize electrons off gas molecules.
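The infrared principle described above can be sketched numerically. NDIR-style sensors apply the Beer-Lambert law, which relates the fraction of light absorbed along the optical path to the gas concentration. The following Python snippet shows that arithmetic; the coefficient and path-length values are purely illustrative, not calibrated figures for any real instrument:

```python
import math

def concentration_from_absorbance(i_measured, i_reference,
                                  absorption_coeff, path_length_cm):
    """Estimate gas concentration from an infrared measurement using
    the Beer-Lambert law: I = I0 * exp(-a * c * L).

    Solving for c gives: c = -ln(I / I0) / (a * L).
    Coefficient values here are illustrative, not calibrated.
    """
    absorbance = -math.log(i_measured / i_reference)
    return absorbance / (absorption_coeff * path_length_cm)

# Hypothetical reading: 80% of the reference light intensity reaches
# the photodetector after passing through a 5 cm sample cell.
c = concentration_from_absorbance(0.80, 1.00,
                                  absorption_coeff=0.002, path_length_cm=5)
print(round(c, 1))  # → 22.3 (relative concentration units)
```

The key design point is that the measurement is ratiometric: only the ratio of measured to reference intensity matters, which is why infrared sensors can stay stable where chemically reactive cells degrade.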

Realistic multi-gas detection training

The last decade has seen an increasing demand for advanced training tools to create the highest levels of realism, to reinforce instruction and to enhance student learning.

The use of intelligent simulation technology for chemical warfare agent training is well established. And now that same pool of knowledge and expertise has been applied to training in multi-gas detection.

One such example is Argon Electronics’ Multi-Gas SIM – an app-based simulator that provides instructors with the ability to set up complex multi-gas training scenarios using an Android phone.

The simulator is highly configurable, which means instructors can set the number of gas sensors they want their students to view and can select the type of sensor (be it infrared, electrochemical, PID, etc.).

They can also program the alarm settings in accordance with the operational detectors in use – so as students move around the training environment, their display readings will adjust to simulate events such as a breached alarm.
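The alarm behaviour described above can be illustrated with a short sketch. The setpoints, sensor names and classification logic below are hypothetical – they follow common multi-gas detector conventions (oxygen alarming on both depletion and enrichment, toxic and combustible channels having low and high setpoints) rather than Argon’s actual configuration:

```python
# Hypothetical sketch of the alarm logic a multi-gas simulator might
# apply as a student's simulated reading updates. All setpoints are
# illustrative, not taken from any real detector configuration.

ALARM_SETPOINTS = {
    "O2":  {"low": 19.5, "high": 23.5},   # percent by volume
    "CO":  {"low": 35.0, "high": 200.0},  # ppm
    "LEL": {"low": 10.0, "high": 20.0},   # % of lower explosive limit
}

def classify_reading(sensor, value):
    """Return the alarm state for one simulated sensor reading."""
    limits = ALARM_SETPOINTS[sensor]
    if sensor == "O2":
        # Oxygen alarms on both depletion and enrichment.
        if value < limits["low"] or value > limits["high"]:
            return "ALARM"
        return "OK"
    if value >= limits["high"]:
        return "HIGH ALARM"
    if value >= limits["low"]:
        return "LOW ALARM"
    return "OK"

print(classify_reading("O2", 18.9))   # → ALARM (depleted oxygen)
print(classify_reading("CO", 50.0))   # → LOW ALARM
print(classify_reading("LEL", 5.0))   # → OK
```

In a live exercise, the simulator would re-run this kind of classification continuously as each student’s GPS position moves them through the virtual plume.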

The option of an instructor remote means that trainers can remotely monitor student readings and activity, to further stimulate discussion and reinforce knowledge.

For those wanting to implement large-scale releases, the multi-gas simulator can also be used with Argon’s PlumeSIM system to provide an enhanced level of realism and a more focused training experience.

Realism, repeatability, safety and efficiency are all key to effective HazMat training.

Detector simulator tools such as Argon’s Multi-Gas SIM promise to play an invaluable role in aiding trainees’ understanding of gas detection to ensure the right decisions are made, however challenging the scenario.

About the Author

Steven Pike is the Founder and Managing Director of Argon Electronics, a leader in the development and manufacture of Chemical, Biological, Radiological and Nuclear (CBRN) and hazardous material (HazMat) detector simulators. He is interested in liaising with CBRN professionals and detector manufacturers to develop training simulators as well as CBRN trainers and exercise planners to enhance their capability and improve the quality of CBRN and Hazmat training.

The Uses of 3D Modeling Technology in the Environmental Remediation Industry

By: Matt Lyter (Senior Staff Geologist at St. John-Mittelhauser & Associates, A Terracon Company) and Jim Depa (Senior Project Manager/3D Visualization Manager at St. John-Mittelhauser & Associates, A Terracon Company)

Three-dimensional (3D) modeling technology is used by geologists and engineers in the economic and infrastructure industries to help organize and visualize large amounts of data collected from fieldwork investigations.  In the oil and gas industry, petroleum geologists use 3D models to visualize complex geologic features in the subsurface in order to find structural traps for oil and natural gas reserves.  In the construction industry, engineers use 3D maps and models to help predict the mechanics of the soil and the strength of bedrock for construction projects.  In the mining industry, economic geologists use high resolution 3D models to estimate the value of naturally occurring ore deposits, like gold, copper, and platinum, in a practice known as resource modeling.

All of the models are built in much the same way: 1) by collecting and analyzing soil samples and/or rock cores; 2) by using a computer program to statistically analyze the resulting data to create hundreds or even thousands of new (or inferred) data points; and 3) by visualizing the actual and inferred data to create a detailed picture of the ground or subsurface in three dimensions.  These models can be used in the economic and infrastructure industries to help predict the best locations to install an oil or gas well, predict the size of an oil or natural gas reserve, assist in the design of a road, tunnel, or landfill, calculate the amount of overburden material needing to be excavated, or help to predict the economic viability of a subsurface exploration project.
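Step 2 of the workflow above – inferring values between sampled points – can be sketched with inverse-distance weighting, one of the simplest spatial interpolation methods and a stand-in here for the kriging-style geostatistics a commercial 3D modeling package would typically apply. All coordinates and concentrations below are invented for illustration, not project data:

```python
import math

def idw_interpolate(samples, target, power=2.0):
    """Inverse-distance-weighted estimate of a value at an unsampled
    (x, y, z) location, from a list of ((x, y, z), value) samples.

    A simplified stand-in for the geostatistical routines (e.g.
    kriging) used in commercial 3D modeling software.
    """
    num, den = 0.0, 0.0
    for point, value in samples:
        d = math.dist(point, target)
        if d == 0:
            return value  # target coincides with a sample point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical soil-boring results: contaminant concentration (mg/kg)
# at four sampled coordinates, estimated at an unsampled location.
borings = [((0, 0, 5), 120.0), ((10, 0, 5), 40.0),
           ((0, 10, 5), 80.0), ((10, 10, 5), 10.0)]
estimate = idw_interpolate(borings, (5, 5, 5))
print(round(estimate, 1))  # → 62.5 (equidistant points, so a simple mean)
```

Running this estimate over a dense grid of target points is what produces the "hundreds or even thousands" of inferred data points that give a 3D model its continuous appearance.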

However, because of the significant amount of computing power needed to create the models, usage of the technology by regulatory driven industries has been limited.  But continuing technological advancements have recently made 3D modeling technology more accessible and affordable for these regulatory driven industries, including the environmental investigation and remediation industry.  Complex 3D models that previously may have taken several days to create using expensive high-end computers can now be made in several hours (or even minutes) using the technology present in most commercially available desktops.  Because of these advancements, subsurface contamination caused by chemical spills can be visualized and modeled in 3D by environmental geologists at a reasonable price and even in near real-time.

3D Models of Soil Contamination

Some of the applications of 3D modeling technology in the environmental investigation and remediation industry are only just beginning to be utilized, but they have already helped to: 1) Identify data gaps from subsurface investigations, 2) Describe and depict the relationship between the geologic setting of a site and underground migration of a contaminant, and 3) Provide a more accurate estimate of the amount of contamination in the subsurface.  The models have also helped contractors design more efficient remediation systems, assisted governmental regulators in decision making, and aided the legal industry by explaining complex geologic concepts to the non-scientific community.  This is especially true when short animations are created using the models, which can show the data at multiple angles and perspectives – revealing complexities in the subsurface that static two-dimensional images never could.

The consultants at St. John-Mittelhauser and Associates, a Terracon Company (SMA), have used 3D modeling technology at dozens of sites across the United States, most recently at a large-scale environmental remediation project in the Midwestern United States.  Contamination from spills of trichloroethylene (TCE), a once widely used metal degreaser, was identified at a former auto parts manufacturer during a routine Phase II investigation.  Dozens of soil samples were collected and analyzed in order to define the extent of contamination, and once that work was completed, traditional 2D maps and a series of cross-sections were created.  One of the cross-sections is shown in the image below:

Cross-section of soil contamination

Traditional Cross-section Showing Geologic Units and Soil Sample Results

The maps and cross-sections were presented to remediation contractors with the goal of designing a remediation system that treated only the extent of the contaminated soil.  The lowest bid received was $4.2 million (USD); however, it was evident to SMA that all of the proposed designs failed to take into account the complexity of the subsurface contamination.  Specifically, the designs proposed treating large portions of the site that were not contaminated.  Therefore, using a 3D modeling program, SMA visualized the soil sample locations, modeled the extent of the contaminated soil in 3D, and created an animation showing the model from multiple perspectives and angles, at a cost of $12,000 (USD).  A screenshot of the model is provided below:

3D modeling program results

3D Side View of TCE Contamination in Soil (15 PPM in Green, 250 PPM in Orange)

The project was resubmitted to the remediation contractors with the 3D models and animation included, resulting in a guaranteed fixed-price bid of $3.1 million – a cost savings of over $1.1 million for the client. Additionally, an animation showing both the remedial design plan and the confirmatory sampling plan was created and presented to the United States Environmental Protection Agency (the regulatory agency reviewing the project) and was approved without any modifications.  To date, the remediation system has removed over 4,200 pounds of TCE from the subsurface, and completion of the project is expected in 2019.  A short animation of the 3D model can be viewed on YouTube.
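The economics of the modeling effort can be checked directly from the figures quoted in the article:

```python
# Return on the 3D modeling investment for the TCE project,
# using the dollar figures reported in the article (USD)
original_bid = 4_200_000   # lowest bid based on the 2D maps and cross-sections
revised_bid  = 3_100_000   # guaranteed fixed-price bid after the 3D model was shared
model_cost   =    12_000   # cost to build the 3D model and animation

gross_savings = original_bid - revised_bid
net_savings = gross_savings - model_cost
print(f"gross savings: ${gross_savings:,}")   # $1,100,000
print(f"net savings:   ${net_savings:,}")     # $1,088,000
```

Even after subtracting the modeling cost, the 3D model paid for itself roughly ninety times over.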


3D Models Showing PCE Contamination in Soil

The 3D modeling software has also been used to help determine the most cost-effective solution for other remediation projects, and has been able to identify (and clearly present) the sources of chemical spills.  The following link is an animation showing three case studies involving spills of perchloroethene (a common industrial solvent) at a chemical storage facility, ink manufacturer, and former dry cleaner: https://www.youtube.com/watch?v=0IlN_TIXkGk

The most cost-effective remediation option was different for each site and was based on the magnitude of the contamination, maximum depth of contaminated soil, geologic setting, and the 3D modeled extent of contamination.  Specifically, the contamination at the chemical storage facility was treated using electrical resistance heating technology, chemical oxidants were used to treat the soils at the ink manufacturer, and soil vapor extraction technology was used at the dry cleaner.

However, several barriers remain that prevent the widespread use of 3D modeling technology.  The various modeling programs can cost upwards of $20,000, plus yearly fees for software maintenance.  There are also costs to organize large datasets, build the necessary files, and create the models and animations.  It must also be noted that 3D models are only statistical predictions of site conditions based on the available data, and the accuracy of the models is wholly dependent on the quantity, and more importantly, the quality of the data.  Even so, 3D modeling technology has proven to play an important role in the environmental remediation industry by helping project managers understand their sites more thoroughly.  It has also provided a way to disseminate large amounts of information to contractors, regulators, and the general public. But, perhaps most importantly, it has saved money for clients.


About the Authors

Matt Lyter (Senior Staff Geologist at St. John-Mittelhauser and Associates, a Terracon Company) provides clients with a wide range of environmental consulting services (environmental litigation support; acquisition and transaction support; site-specific risk assessment, etc.), conventional and state-of-the-art environmental investigation services, and traditional to advanced environmental remediation services.

Jim Depa (Senior Project Manager/3D Visualization Manager at St. John-Mittelhauser and Associates, a Terracon Company) has over 12 years of experience as a field geologist, project manager, and 3D modeler.  He is well-versed in a variety of computer programs, including C-Tech’s Earth Volumetric Studio (EVS), Esri’s ArcGIS, AQTESOLV, MAROS, Power Director 16, and Earthsoft’s EQuIS.

Bioremediation: Global Markets and Technologies to 2023

A report issued by BCC Research provides an overview of the global markets and technologies of the bioremediation industry. The report predicts that the global bioremediation market should grow from $91.0 billion in 2018 to $186.3 billion by 2023, increasing at a compound annual growth rate (CAGR) of 15.4% from 2018 through 2023.
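The report's growth figures can be sanity-checked with the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) − 1:

```python
# Verify the report's stated CAGR: a market growing from $91.0B (2018)
# to $186.3B (2023) over five years
start, end, years = 91.0, 186.3, 5
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")   # implied CAGR: 15.4%
```

The implied rate matches the 15.4% figure quoted in the report.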

One of the findings of the report is that the application of bioremediation technology in the water bodies sector held the largest market share in 2017, and it is expected to remain the market leader throughout the forecast period.

The report anticipates that the increasing use of bioremediation techniques for treating sewage, lakes, rivers and streams, ponds and aquaculture will create significant growth opportunities for the market in the coming years. In recent years, the growth of the agriculture industry has also increased the amount of hazardous pollutants in the environment, and the application of bioremediation methods in the agricultural sector is thus expected to be the fastest-growing segment.


Redox zones of a typical contaminant plume (Source: Parsons 2004)

The report breaks down and analyzes the bioremediation market into three categories:

  • By type: In situ and ex situ bioremediation.
  • By application: Water bodies, mining, oil and gas, agriculture, automotive and other industries.
  • By region: North America is segmented into the U.S., Canada and Mexico; Europe is segmented into the U.K., Germany, France, Russia and Rest of Europe; the Asia-Pacific region is segmented into Japan, India, China and Rest of Asia-Pacific; and the Rest of the World (ROW) covers Latin America, Middle East and Africa.

The estimated values provided in the report are based on manufacturers’ total revenues. Projected and forecast revenue values are in constant U.S. dollars, unadjusted for inflation.

This report also includes a patent analysis and a listing of company profiles for key players in the bioremediation market.

Similar Reports

In 2014, a team of United Kingdom researchers at the University of Nottingham and Heriot-Watt University published the results of a global survey on the use of bioremediation technologies for addressing environmental pollution problems. The findings of the survey were quite interesting.

Preferred vs. Actual Treatment Method

One of the findings of the UK survey was the difference between the preferred vs. actual treatment method. More than half of respondents (51%) stated that they would prefer to use environmentally friendly approaches including microbial remediation (35%) and phytoremediation (16%). However, historical information suggests the opposite has actually been the case. Considering the relative low cost and low energy requirements of bioremediation technologies, the gulf between aspiration and practice might be due to various factors involving the risk-averse nature of the contaminated-land industry, or difficulties in project design. The latter include identifying appropriate organisms for removing specified contaminants, optimizing environmental conditions for their action, ascertaining extents of eventual clean-up, and the incomplete understanding of all the mechanisms and processes involved. These lead to difficulties in modeling, simulating and/or controlling these processes for improved outcomes.

Application of Bioremediation Techniques

The Figure below compares the broad bioremediation methods being employed within industry according to the 2014 survey, namely monitored natural attenuation (MNA), bio-augmentation and bio-stimulation. The use of low-cost in situ technologies (like MNA) featured quite prominently, particularly in North America and Europe, where it accounts for over 60% of the bioremediation methods being used. This finding points to a strong concern within developed countries for maintaining ecological balance and preventing disruption of naturally occurring populations.

MNA has been shown to require 1) elaborate modeling, 2) evaluation of contaminant degradation rates and pathways, and 3) a prediction of contaminant concentrations at migration distances and time points downstream of exposure points. This is to determine which natural processes will reduce contaminant concentrations below risk levels before potential courses of exposure are completed, and to confirm that degradation is proceeding at rates consistent with clean-up objectives. These results appear to suggest that regions which employ computational and modeling resources are better able to use low-cost bioremediation technologies like MNA, whereas the others tend to use the more traditional and less cost-effective technologies. In all the continents, researchers were found to favor the use of bio-stimulation methods. Less disruption of ecological balance is apparently a global concern.
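As a rough illustration of the kind of prediction an MNA evaluation relies on, a simple first-order decay model estimates how a contaminant concentration declines over time, C(t) = C₀·e^(−kt). The rate constant and concentrations below are hypothetical; real MNA evaluations use far more elaborate, site-specific models that also account for dispersion, sorption and geochemistry.

```python
import math

def mna_concentration(c0_ppm, decay_rate_per_yr, years):
    """Simplified first-order natural-attenuation prediction:
    C(t) = C0 * exp(-k * t)."""
    return c0_ppm * math.exp(-decay_rate_per_yr * years)

# hypothetical plume: 120 ppm today, attenuating at k = 0.3 per year
c10 = mna_concentration(120.0, 0.3, 10)
print(f"predicted concentration after 10 years: {c10:.1f} ppm")

# rearranging gives the time needed to reach a cleanup target:
# t = ln(C0 / C_target) / k
t_to_5ppm = math.log(120.0 / 5.0) / 0.3
print(f"years to reach 5 ppm: {t_to_5ppm:.1f}")
```

Calculations of this sort are what allow regulators to check that degradation is proceeding at rates consistent with clean-up objectives before exposure pathways are completed.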

Background on Bioremediation

Bioremediation is a method that uses naturally occurring microorganisms such as bacteria, fungi and yeast to degrade or break down hazardous substances into non-toxic or less-toxic substances. Microorganisms eat and digest organic substances for energy and nutrients.

Certain microorganisms can break down organic substances, such as solvents or fuels, that are hazardous to the environment. These microorganisms degrade the organic contaminants into less-toxic products, mainly water and carbon dioxide.

The microorganisms must be healthy and active for this to occur.

Bioremediation technology helps microorganisms grow and boosts the microbial population by creating optimum environmental conditions. The particular bioremediation technology utilized is determined by various factors, including the site conditions, the types of microorganisms present, and the toxicity and quantity of contaminant chemicals.

Bioremediation takes place under both aerobic and anaerobic conditions. Under aerobic conditions, microorganisms utilize the oxygen present in the atmosphere to function.

With a sufficient amount of oxygen, microorganisms transform organic contaminants into water and carbon dioxide. Under anaerobic conditions, where no oxygen is present, microorganisms degrade the chemical compounds in the soil to release the energy they require.

Factors of influence in bioremediation processes

Bioremediation technology is used to clean up contaminated water and soil. There are two main types of bioremediation: in situ and ex situ.

The in situ bioremediation process treats the contaminated groundwater or soil in the location where it is found. The ex situ process requires the pumping of groundwater or the excavation of contaminated soil before it can be treated.

In situ bioremediation is typically segmented into phytoremediation, bioventing, bioleaching, bioslurping, biostimulation and bioaugmentation. Ex situ bioremediation is typically segmented into composting, controlled solid-phase treatment and slurry-phase biological treatment.

Biodegradation is a cost-effective natural process that is useful for the treatment of organic wastes. The extent of biodegradation is greatly dependent upon the initial concentrations and toxicity of the contaminants, the properties of the contaminated soil, the contaminants’ biodegradability and the specific treatment system selected.

In biodegradation treatment, the targeted contaminants are semi-volatile and nonhalogenated volatile organics and fuels. The benefits of bioremediation, however, are limited at sites with highly chlorinated organics and high concentrations of metals, as they may be harmful to the microorganisms.

https://www.researchandmarkets.com/publication/mkvz6uj/4752244

Provincial Environmental Obligations Prevail Over Federal Bankruptcy Laws – Supreme Court of Canada

by Paul Manning, Manning Environmental Law

Recently, the Supreme Court of Canada released its decision in the case of Orphan Well Association, et al. v. Grant Thornton Limited, et al.

The decision writes another chapter in the long running saga of whether a company’s environmental regulatory obligations survive bankruptcy and, in particular, whether the company’s trustee in bankruptcy can disclaim an asset so as to avoid environmental liability. (See our blog post The Non-Polluter Pays: Creditor Roulette and Director Liability)

The Supreme Court has now decided in Orphan Well that, after going bankrupt, an oil and gas company must fulfill provincial environmental obligations before paying its creditors.

Background

Redwater was an Alberta oil and gas company, which owned over a hundred wells, pipelines, and facilities when it went bankrupt in 2015.

Alberta has provincial laws requiring oil and gas companies to obtain a licence to operate. As part of the licence, companies have to “abandon” wells, pipelines, and facilities when they are done. This means permanently taking these structures down. They also have to “reclaim” the land by cleaning it up. Companies cannot transfer licences without permission from the Alberta Energy Regulator (AER), which they won’t receive if they haven’t met their responsibilities.

Most of Redwater’s wells were dry when it went bankrupt. Dismantling the sites and restoring the land would have cost millions of dollars more than they were worth. To avoid paying those costs, the trustee decided to disclaim (i.e. not to take responsibility for) the redundant wells and sites under the federal Bankruptcy and Insolvency Act (BIA). The trustee wanted to sell the productive sites to pay Redwater’s creditors.

The AER said that this wasn’t allowed under the BIA or provincial law and ordered the trustee to dismantle the disowned sites. The trustee argued that even if the AER was correct, the provincial abandonment orders were only provable claims under the BIA. In this case, this meant the money would first go to pay Redwater’s creditors.

The Supreme  Court’s Decision

There were two main legal issues before the Supreme Court. The first was whether the BIA allowed the trustee to disclaim the sites it didn’t want to take responsibility for. The second was whether the provincial orders to remove structures from the land were provable claims under the BIA. If they were, the payment order set up in the BIA would apply: only money left over, if any, after those payments were made could be used to pay for taking the sites down.

The trial judge had ruled that the trustee was allowed to disclaim the disowned sites and the abandonment costs were only provable claims in the bankruptcy. The majority of judges at the Alberta Court of Appeal hearing had agreed.

The majority of judges at the Supreme Court disagreed. The Court ruled that the trustee could not disclaim the disowned sites. It said the BIA was meant to protect trustees from having to pay for a bankrupt estate’s environmental claims with their own money; it did not mean Redwater’s estate could avoid its environmental obligations.

The majority also said the abandonment costs were not “provable claims”. These costs weren’t debts requiring payments; they were duties to the public and nearby landowners. This put the abandonment costs outside the BIA’s payment order scheme and as such, the majority ruled, there was no conflict between the federal and provincial laws.

(The minority of judges at the Supreme Court disagreed, arguing that there was a genuine conflict between the federal and provincial laws and the BIA being the federal law should prevail over the provincial regulations. Where a valid provincial law conflicts with a valid federal law, the federal law will normally prevail under the constitutional law “doctrine of paramountcy.”)

As the trustee had already sold or given up all of Redwater’s assets, the money from the sales was held “in trust” by the court during the lawsuit. This money must now be used to abandon and reclaim the land before anything is paid to any of Redwater’s creditors.

Click here for the full decision of the Supreme Court of Canada in Orphan Well.

_________________________________________________________________

Manning Environmental Law is a Canadian law firm based in Toronto, Ontario. Our practice is focused on environmental law, energy law and aboriginal law. 

Paul Manning is a certified specialist in environmental law. He has been named as one of the World’s Leading Environmental Lawyers and one of the World’s Leading Climate Change Lawyers by Who’s Who Legal. This article is intended only as a general guide and is not legal advice.

Cost of Nuclear Waste Clean-up in the U.S. estimated at $377 Billion

A new report by the United States Government Accountability Office (GAO) estimates the total cleanup cost for the radioactive contamination incurred by developing and producing nuclear weapons in the United States at a staggering $377 billion (USD), a number that jumped by more than $100 billion in just one year.

The United States Department of Energy (DoE) Office of Environmental Management (EM) is responsible for cleaning up radioactive and hazardous waste left over from nuclear weapons production and energy research at DoE facilities. The $377 billion estimate largely reflects estimates of future costs to clean up legacy radioactive tank waste and contaminated facilities and soil. 

The U.S. GAO found that EM’s liability will likely continue to grow, in part because the costs of some future work are not yet included in the estimated liability. For example, EM’s liability does not include more than $2.3 billion in costs associated with 45 contaminated facilities that will likely be transferred to EM from other DOE programs in the future.

In 1967 at the height of the U.S.–Soviet nuclear arms race, the U.S. nuclear stockpile totaled 31,255 weapons of all types. Today, that number stands at just 6,550. Although the U.S. has deactivated and destroyed 25,000 nuclear weapons, their legacy is still very much alive.

Nuclear weapons were developed and produced at more than one hundred sites during the Cold War. Cleanup began in 1989, and EM has completed cleanup at 91 of 107 nuclear sites. Still, according to the GAO, “16 remain, some of which are the most challenging to address.”

EM relies primarily on individual sites to locally negotiate cleanup activities and establish priorities. GAO’s analysis of DOE documents identified instances of decisions involving billions of dollars where such an approach did not always balance overall risks and costs. For example, two EM sites had plans to treat similar radioactive tank waste differently, and the costs at one site—Hanford—may be tens of billions more than those at the other site. 

Each of the 16 remaining cleanup sites sets its own priorities, which makes it hard to ensure that the greatest health and environmental risks are addressed first. This is not consistent with recommendations by GAO and others over the last two decades that EM develop national priorities to balance risks and costs across and within its sites.

By far the most expensive site to clean up is the Hanford site, which manufactured nuclear material for use in nuclear weapons during the Cold War. In 2017, the DoE estimated site cleanup costs at $141 billion.

Environmental liabilities are high risk because they have been growing for the past 20 years and will likely keep increasing.

EM has not developed a program-wide strategy that determines priority sites. Instead, it continues to prioritize and fund cleanup activities by individual site. Without a strategy that sets national priorities, EM lacks assurance that it is making the most cost-effective cleanup decisions across its sites.

The GAO made three recommendations to the DoE: (1) develop a program-wide strategy that outlines how it will balance risks and costs across sites; (2) submit its mandated annual cleanup report that meets all requirements; and (3) disclose the funding needed to meet all scheduled milestones called for in compliance agreements, either in required annual reports or other supplemental budget materials.

Exxon Valdez Oil Spill – Lessons learned 30 years after the event

As reported in the Fairbanks Daily News-Miner, there are still lessons to be learned from the Exxon Valdez oil spill that occurred on March 24th, 1989.

A recent report issued by the United States Government Accountability Office (U.S. GAO) found that some organizations involved in environmental cleanup, restoration and research weren’t talking to each other during the Exxon Valdez Oil Spill or the Deepwater Horizon oil spill that occurred in 2010. In fact, some agencies weren’t even aware that the other existed.

The U.S. Congress, reacting to the Exxon Valdez spill, created the Interagency Coordinating Committee on Oil Pollution Research as part of the Oil Pollution Act of 1990. The committee’s purpose is to “coordinate oil pollution research among federal agencies and with relevant external entities,” according to the GAO. The committee, which has representatives from 15 agencies, is expected to coordinate with federal-state trustee councils created to manage restoration funds obtained through legal settlements.

GAO investigators found, however, that “the committee does not coordinate with the trustee councils and some were not aware that the interagency committee existed.”

Although three decades have passed since oil soiled the surface of Prince William Sound and rolled onto its shores, evidence of the spill remains. GAO staff visited the spill site in May of last year “and observed the excavation of three pits that revealed lingering oil roughly 6 inches below the surface of the beach …” Restoration is largely complete in Prince William Sound, but some work continues and research will continue for decades, the GAO report notes.

Background: Exxon Valdez Spill and Clean-up

As reported by History.com, the Exxon Valdez oil spill was a man-made disaster that occurred when the Exxon Valdez, an oil tanker owned by the Exxon Shipping Company, spilled 41 million litres of crude oil into Alaska’s Prince William Sound on March 24, 1989. It was the worst oil spill in U.S. history until the Deepwater Horizon oil spill in 2010. The Exxon Valdez oil slick covered 2,000 kilometres of coastline and killed hundreds of thousands of seabirds, otters, seals and whales.

Exxon paid about $2 billion in cleanup costs and $1.8 billion for habitat restoration and personal damages related to the spill.

Cleanup workers skimmed oil from the water’s surface, sprayed oil dispersant chemicals in the water and on shore, washed oiled beaches with hot water and rescued and cleaned animals trapped in oil.

Environmental officials purposefully left some areas of shoreline untreated so they could study the effect of cleanup measures, some of which were unproven at the time. They later found that aggressive washing with high-pressure, hot water hoses was effective in removing oil, but did even more ecological damage by killing the remaining plants and animals in the process. Nearly 30 years later, pockets of crude oil remain in some locations.

Lessons Learned

A 2001 study found oil contamination remaining at more than half of the 91 beach sites tested in Prince William Sound.

The spill had killed an estimated 40 percent of all sea otters living in the Sound. The sea otter population didn’t recover to its pre-spill levels until 2014, twenty-five years after the spill.

Stocks of herring, once a lucrative source of income for Prince William Sound fishermen, have never fully rebounded.

In the wake of the Exxon Valdez oil spill, the U.S. Congress passed the Oil Pollution Act of 1990 (OPA). The OPA increased penalties for companies responsible for oil spills and required that all oil tankers in United States waters have a double hull. It also established the Interagency Coordinating Committee on Oil Pollution Research (interagency committee) to coordinate oil pollution research among federal agencies and with relevant external entities, among other things.

The U.S. GAO recommends, among other things, that the interagency committee coordinate with the trustee councils to support their work and research needs.