Wednesday, August 10, 2011

Record Electricity Demand in Texas Creates Battling Headlines about Wind Power

Last Wednesday, August 3rd, a new record for electricity demand was set in the ERCOT service area. ERCOT wasted no time in putting some data from the event online.

The American Wind Energy Association (AWEA) put up a blog post titled Wind helps meet new Texas record for electricity demand. That much we would expect, except for this part, referring to the fact that wind power in the area was contributing 2 GW at the time of record electricity use:
The 2,000 MW were more than double the 800 MW that ERCOT counts on from wind during periods of peak summer demand for its long-term planning purposes, and enough to power about 400,000 homes under the very high electricity demand conditions seen yesterday.
System operators officially say that they count on 8% of the rated output of wind power to help meet demand. That is to say, for every wind farm built, 0.08 times its rated maximum output is that much less capacity that has to be covered by fossil fuel plants. Since 800 MW is the 8% mark, 2,000 MW is 20%, which is the capacity factor the combined wind farms in the area (most of Texas) achieved during the peak hour. Here is a graph for the entire day from ERCOT with the peak hour circled and a line drawn at 8% of total rated output:

So, yes, the output during peak demand was higher than the level said to be "reliable", but it had dropped below the 8% level just 5 hours earlier... that same day. Wind output hit a low around 11am to 12pm, and the peak hour was 4-5pm. It's hard to say that AWEA wasn't setting itself up for some backlash. Of course, I'm not the only one to notice this. The Capacity Factor blog has a bit of a sharper tongue, writing that Texas grid declares Level 1 Emergency as ten thousand megawatts of wind power stands paralyzed. The Capacity Factor correctly points out that all nuclear plants were operating at full capacity during the peak hour, but commenters at the AWEA blog also correctly point out that thermal plants have lower thermal efficiency when temperatures are highest, as they were on August 3rd.
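To make the 8%-versus-20% arithmetic concrete, here is a minimal sketch. The roughly 10,000 MW nameplate figure is implied by the 800 MW / 8% relationship (and matches the "ten thousand megawatts" in the Capacity Factor headline), but treat it as my approximation rather than an official ERCOT number:

```python
# Capacity credit vs. capacity factor, using the numbers discussed above.
NAMEPLATE_MW = 10_000       # approximate installed wind capacity (my assumption)
CAPACITY_CREDIT = 0.08      # fraction ERCOT "counts on" for summer peak planning

def capacity_factor(output_mw, nameplate_mw=NAMEPLATE_MW):
    """Fraction of rated output actually delivered at a given hour."""
    return output_mw / nameplate_mw

planning_level_mw = CAPACITY_CREDIT * NAMEPLATE_MW   # the 800 MW planning mark
peak_hour_output_mw = 2_000                          # reported output at the demand peak

print(capacity_factor(peak_hour_output_mw))          # 0.2, i.e. a 20% capacity factor
print(peak_hour_output_mw / planning_level_mw)       # 2.5, "more than double" 800 MW
```

The same function applied to the late-morning low would land well below the 0.08 line, which is the whole point of the graph above.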

The Real Story

The reality is that system operators don't absolutely count on 8% of the capacity of wind farms. Some days it dips below that level and some days it doesn't. It is also not the case that system operators "count" on any given plant in order to meet demand. System reliability is based on a probabilistic assessment of all possible failures. The situation is slightly different for wind power because a sudden reduction in output isn't the result of a failure, but a change in the weather pattern. Nuclear plants are some of the most reliable units on the grid, but a loss of a nuclear unit is often the most restricting contingency for planners because they are the largest units.

It is true that wind power increases the grid management challenge, and the most obvious reason is simply that other units have to respond to the difference between demand and what wind power delivers. From 12pm to 4pm on August 3rd, both demand and wind output were increasing. That's a great situation, because it leaves less slack for other units to quickly pick up. Wind output and power demand, however, are in no way guaranteed to be correlated; there are times when demand is going up while wind output is going down, and those times are planned for by the system operators as well.
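A toy sketch of that point: what the other units must follow is the net load, demand minus wind output. The hourly numbers here are hypothetical, chosen only to mimic the rising 12pm-4pm pattern, not taken from ERCOT data:

```python
# Net load: the slack other generating units must cover each hour.
demand_mw = [60_000, 62_000, 64_000, 66_000, 68_000]   # hypothetical 12pm-4pm ramp
wind_mw   = [1_200,  1_400,  1_600,  1_800,  2_000]    # wind rising with demand

net_load = [d - w for d, w in zip(demand_mw, wind_mw)]
ramps = [b - a for a, b in zip(net_load, net_load[1:])]

# When wind rises with demand, the net-load ramp other units must follow
# (1,800 MW/hr here) is smaller than the raw demand ramp (2,000 MW/hr).
# If the wind series were falling instead, the net-load ramp would be steeper.
print(ramps)
```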

The real story is also that wind power in Texas matters a great deal at this time, and it is one of the critical proving grounds for grid integration of large scale wind. Take a look at the history of the installation of electric capacity to the grid in the United States, via the EIA Today in Energy.

That green sliver is wind capacity added. Roughly 30-35% of the green area is actual energy generation capability added (compared to about 90% for the pink, nuclear sliver), and 8% of the green sliver is reliable capacity added... well, sort of... it's complicated, actually. Either way, it's still energy that doesn't release carbon into the atmosphere. This is why it is neither the case that wind power is bringing the grid in Texas to its knees nor that it is what keeps the grid reliable. It helps meet demand with clean energy, and that's what matters.
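As a rough illustration of why the green sliver overstates energy added, using the approximate capacity factors cited above (the 1,000 MW figures are arbitrary, just for comparison):

```python
# Nameplate capacity added is not the same as average energy delivered.
def effective_energy_mw(nameplate_mw, capacity_factor):
    """Average MW of generation expected from a given nameplate MW installed."""
    return nameplate_mw * capacity_factor

wind_added = 1_000   # hypothetical MW of wind nameplate capacity
nuke_added = 1_000   # hypothetical MW of nuclear nameplate capacity

print(effective_energy_mw(wind_added, 0.33))  # roughly 330 MW average generation
print(effective_energy_mw(nuke_added, 0.90))  # roughly 900 MW average generation
```

So an equal-width sliver on the capacity chart can represent nearly three times less energy when it is wind rather than nuclear.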

Update: I put some more info up on my auxiliary blog.

Saturday, August 6, 2011

Lessons Learned from Fukushima Daiichi: What the MIT Report Said (and Didn't Say)

The questions that remain from Fukushima Daiichi, as they apply to today's and tomorrow's reactors, could be a very good gauge for the winds of change in the nuclear industry. A lessons learned report from MIT CANES came out recently (hat tip to Steve Darden), and the material is just too important to pass up. It is a great report for anyone wanting to get a grasp on the important questions that stem from the disaster. We have covered the recent IAEA perspective here on The Neutron Economy, which is a great read about the questions regarding what happened. I want to take a moment here to recap and address the enumerated points from the MIT CANES report about "lessons learned".

For some previous attempts at lessons learned, see guest posts from experts at bravenewclimate or Idaho Samazat for just two examples. With that said, here are the points from the MIT CANES report. I'll try to offer a dumbed-down version as much as possible for all of these, and of course, plenty of my own spin on things.

Emergency Power following Beyond-Design-Basis External Events
Simplified: How to assure the plant has power to keep the pumps running after a major natural disaster or terrorist attack

This concern is about the Station Blackout (SBO) event. Of course it was the earthquake-tsunami that caused the Daiichi meltdown, but it did so by causing an SBO, so the relevance to nuclear safety is obvious. Even if we can't prevent tsunamis, we can prevent prolonged SBOs. MIT gives the "place the emergency diesel units on a hill" kind of recommendation, and this is not the first time I've read this. Waterproofing is another possibility, and this was cited in Japanese media as one reason the situation at the Tokai nuclear power plant did not worsen; it also applies to the seawater pumps. There are also recommendations to have transportable generators that could be brought in. The question on my mind is how much this recommendation does or doesn't overlap with what we already have in place. For the US, there is in fact a good chance that the public simply doesn't know what we have ready for responding to a nuclear emergency, since many such measures were developed for post-9/11 security, and an anti-terrorism safety feature works best when the public doesn't know the details. The MIT report also mentions passive safety features and asks whether new plants should be required to have a mix of passive and active safety features. This question would be most relevant to designs like the EPR, and the performance of a design like the EPR (meaning fewer passive features) during SBO might be indirectly coming under fire here.

Emergency Response to Beyond-Design-Basis External Events
Simplified: Emergency response sums it up pretty well

Not only are things that did happen relevant to the post-Fukushima questions; what could have happened but didn't is also fair game. This report mentions the possibility of staffing problems due to direct fatalities from an earthquake or tsunami. Additionally, problems in determining the evacuation zone and communicating with the public and with other governments are mentioned. It is suggested that organizations like INPO (an industry group in the US) or an international group could form rapid-response teams for nuclear emergencies. It also notes that there is a tradeoff between evacuation area and stress imposed on the local population, a difficult call that I've written about myself. There is also mention of the need to communicate radiation risk to the public. I would just add: be prepared for outrage. Lately I've been following the Uncanny Terrain documentary effort, which on one hand is fantastic for telling the human story of these tradeoffs between radiation risk and livelihood, but it also shows that people are upset over social justice issues. This is one of those things you just can't "fix".

Hydrogen Management
Simplified: How to keep Hydrogen from exploding in an accident

The Hydrogen explosions at Fukushima Daiichi are obviously something to address. The MIT report mentions venting via "strong pipes", connecting the pool areas more directly to the plant stack, more hydrogen recombiners and igniters (specifically in the upper building), catalytic recombiners in the ventilation system inside containment (not done as of yet), research into hydrogen flares as a solution for large hydrogen buildup, and use of something other than Zircaloy for cladding. I have previously read about the strong pipes (or hard pipes) topic, and the Idaho Samazat post mentioned using non-Zircaloy cladding materials. In my opinion, changing the cladding material could be very expensive and have a number of unintended consequences, for instance, reduced neutron economy. There is no technical reason the switch could not be made, but it would probably still require significant research and development lead time, if any nation decides to go that route in the first place.

Note: The focus of this point is on the need to vent the egg-shaped primary containment

The report suggests directly that the (primary, or inner) containment should be vented to the stack instead of to the secondary containment pools, the path that led to the hydrogen explosions. That would come along with the other solutions mentioned under the hydrogen management point, including new catalytic recombining systems. Use of passive containment cooling is mentioned, but no word on whether this has any relevance to existing reactor designs.

Spent Fuel Pools
Note: BWR spent fuel pools are unique and several recommendations are specific to those

It is believed that the spent fuel pools accounted for a large share, if not the majority, of the radioactive releases, so obviously this is one of the most important lessons-learned items. The MIT report gives prompt movement of spent fuel to dry storage as an option... with many provisos. There is a safety benefit to doing this, although there are fundamental limitations associated with it, which is a debate the authors of this blog have been through several times. The MIT report notes the following shortcomings of dry storage:
  • The casks must be secured so they do not tip during an earthquake
  • A break in a cask will result in direct and unmitigated release to atmosphere
  • The decay heat in spent fuel pools is mostly unaffected by using more dry cask storage, because only the older fuel can be moved, and it doesn't contribute much to the heat
Very good points. Put together, this means that dry storage may or may not be preferable to other options, which the MIT report identifies as on-site spent fuel pools (I'm guessing this means pools not close to the reactor) and centralized interim storage. The interim storage option is being talked about today as a result of the Blue Ribbon Commission on spent fuel management, by the way, although for reasons that have limited overlap with these post-Fukushima concerns.
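To see why the third bullet above holds, here is a back-of-the-envelope sketch using the classic Way-Wigner decay-heat approximation. This is a coarse textbook formula, not a licensing calculation, and the 3-year in-core operating time is my assumption:

```python
# Way-Wigner approximation: P/P0 ~ 0.066 * (t**-0.2 - (t + T)**-0.2),
# with t = seconds since shutdown and T = seconds of prior operation.
def decay_heat_fraction(t_s, T_s):
    """Decay heat as a fraction of operating power (rough approximation)."""
    return 0.066 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

YEAR = 3.156e7                 # seconds in a year
T = 3 * YEAR                   # assumed ~3 years of in-core operation

fresh = decay_heat_fraction(30 * 86400, T)   # discharged about a month ago
old   = decay_heat_fraction(5 * YEAR, T)     # cooled 5 years, dry-cask candidate

# Freshly discharged assemblies dominate the pool's heat load by an order
# of magnitude, so moving only the old fuel barely changes the total.
print(fresh / old)
```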

Some of the other recommendations are also very worthy of mention. Passive cooling of the spent fuel pools, and the policy of moving fuel out during outage are mentioned as things that need to be looked at for current plants. For the future, it is suggested that spent fuel could have its own containment, regional spent fuel storage facilities are relevant (with a hat-tip to Rokkasho, which is unfinished business), and a national repository could be created.

Plant Siting and Site Layout
Simplified: How do you keep an accident at one reactor from affecting other units?

This report notes the Fukushima Daiichi plant's "compact layout", and of course, cross-unit complications. The tsunami disabled a staggering 13 emergency diesel generators. Also, the Daini and Onagawa plants were at risk, reflecting concerns about regional clustering of nuclear plants. The phrase to remember here is common cause failure, and the report also coins the term unit-to-unit contagion.

A major conundrum that still remains is how to deal with earthquake risk. Saying that we should avoid major fault lines is obvious, but sites in places like Japan and Taiwan are always vulnerable to seismic risk. How do we regulate for this? This is a hard question. The report suggests developing criteria for evaluating how many units should be permitted at a given site.

Things not mentioned in the report

As someone who's followed this event and some of the technical aspects of it, I already had my own list of "lessons learned", but almost everything was covered by this report. Of course, there are a few things I had down which were not covered, as well as items other people have mentioned that weren't in there.
  • Beyond Design Basis - I had been wondering if the way we look at nuclear safety will change due to the sheer magnitude of the event relative to the tsunami magnitude that was planned for. Risk-informed analysis and regulation seeks not to place an upper limit on the events that must be tolerated (such as a x.x earthquake), and instead seeks to set an acceptable accident frequency. I think Fukushima Daiichi strengthens the impetus for this perspective, since if nature is going to exceed the limits you set anyway, then you might as well quantify the risks in a more appropriate manner. Then again, if the problem at Fukushima Daiichi is that the tsunami risk was underestimated, then changing the way we look at risks will not fix that. That is a different type of error.
  • A comprehensive approach to flood events - A great read on past precedents that could have helped at Fukushima Daiichi, had its lessons been applied, is the 1999 flood of a reactor in France. Not to mention, there are plants in the US like Browns Ferry that have both a similar design to Fukushima and a flood risk. Don't forget about Fort Calhoun either, which the media was all over, predicting a catastrophic flood event. Since flooding keeps coming up as a concern, I had been wondering if this could be a sort of Achilles heel that needs more attention. The Japanese plants have certainly drafted plenty of flood mitigation plans in the wake of the Daiichi disaster; could a worldwide revamp of flood safety be in order? Looking at the lessons learned from the 1999 French flood, it would appear that safety measures can be expensive, easily in the $100 million range. Maybe there really isn't an easy fix to this problem, although I admit to having a sense of irony, since the problem that leads to meltdown is an insufficiency of water. Maybe we will ultimately switch to floating nuclear plants and be done with both the concern of flood and long term cooling at the same time.
  • Reevaluation of liquid effluent danger - The battle for stability of the stricken Daiichi plants was fraught with problems of radioactive water. Groundwater contamination will be a topic for many years to come, and contamination of the nearby ocean halted fishing for some time. I wonder if any specific solutions in this area will be called for.
  • Global regulation and crisis management - This was addressed by the MIT report under emergency planning, but it could have effects more far-reaching than what was mentioned. Accusations that the Japanese government made things worse by failing to promptly accept US assistance were rife. Ultimately, however, the safety of the public in a nation is the responsibility of the national regulator. I wonder to what extent people will continue to be satisfied with this structure. It is personally frustrating to have so many good things to say about the US industry's safety record, only to be stonewalled by questions about the safety of the Japanese industry because it's simply not in our regulatory purview. I wonder if something more fundamental to the nature of safety and responsibility could be in the cards.

Monday, August 1, 2011

Studies show Thorium can be used in many different reactor types

The sustainability of the nuclear fuel cycle will be an eventual threat to the expansion of nuclear power, even if Uranium is cheap at the moment. As mentors of mine have argued, if we can't move to a more advanced fuel cycle, nuclear power will likely have "no future". The fungibility of fuels used in commercial power plants is critical to this discussion, but it's often not well understood. I'll be offering some perspective on a recent string of papers describing the potential use of Thorium in fast reactors, high temperature gas cooled reactors, and combined fuel cycles with molten salt reactors. I'll also be sharing my feelings on two commercial ventures that hope to put Depleted Uranium and Thorium to work as a power source.

Traveling Wave Reactor

Not long ago there was an article in Technology Review that showcased some new design features of Terrapower's proposed traveling wave reactor. These include changes said to make the design more "buildable", and possibly more similar to conventional integral fast reactor (IFR) designs. Many experts were never convinced of the advantages offered by the traveling wave design and its 60-year core lifetime, and were already asking "why not build an IFR?" I have plenty of my own thoughts on this issue and had considered putting together a full post about the engineering tradeoffs involved in designing a core to breed new fuel, and the innovation offered by a traveling wave reactor. In short, fabricating new fuel entirely from Uranium-238, breeding it, and burning it without reprocessing is very appealing, but there is an engineering conflict with the burnup limits the fuel materials can handle, and accomplishing this task could be difficult without a long and expensive program to improve fuel performance and assure it over a 60-year service life (yikes!). The other challenge is building a core that has fuel for 60 years with a moving, active heat-producing region. That means that you'll only be getting heat out of 5% of the core but will still be pumping coolant through the entire thing. An old article by the inventors behind the traveling wave reactor reveals their intended solution, which is to regulate flow over different regions of the core using thermocouples, a practice that has not been used in the nuclear industry but would have an obvious economic benefit in any kind of reactor.
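To illustrate the flow-regulation idea, here is a toy sketch that allocates coolant flow in proportion to region power, the way a thermocouple-based zoning scheme might aim to; the power profile is invented for illustration and is not from the Terrapower article:

```python
# Toy flow zoning: give each core region coolant flow proportional to its
# power, instead of pumping uniform flow through regions producing no heat.
region_power_mw = [2, 5, 180, 560, 180, 5, 2]   # hypothetical "wave" profile
total_flow = 1.0                                 # normalized total pump flow

total_power = sum(region_power_mw)
flow_alloc = [total_flow * p / total_power for p in region_power_mw]

# With uniform flow, most of the pumping work goes to nearly cold regions;
# with power-proportional flow, the allocation tracks the active wave.
print([round(f, 3) for f in flow_alloc])
```

As the wave migrates over the decades, the allocation would have to migrate with it, which is exactly what the thermocouple feedback is for.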

Fuel reshuffling without reprocessing may be the compromise between innovation and proven design that Terrapower goes with. Wikipedia hosts a fun image illustrating the dream of a 60-year-life traveling wave nuclear reactor core. The conventional (Uranium-235) fuel lies in the center, and the wave propagates out from there into the Depleted Uranium fuel (although Thorium could work as well), with green representing the heat-producing region. This kind of burn pattern is the alternative to fuel reshuffling and/or reprocessing, and is the essence of a "traveling wave".

Thorium in Light Water Reactors

Of the 440 total reactors the IAEA counts as operating or in shutdown, 359 are LWRs. In other words, they are the majority. Fortunately, there are steps that can be taken to reduce Uranium usage in these reactors by partially replacing it with another fuel. I consider this a very important fact to bear in mind as the Uranium supplied from weapons stockpiles runs out, which it will sooner or later. Lightbridge Corporation is a company currently poised to capitalize on the use of Thorium in the most common types of reactors in the world.

Use of Plutonium from LWRs in Advanced Thorium Burning Reactors
Paper: Evaluation of implementation of thorium fuel cycle with LWR and MSR

Researchers from an Australian and a Japanese university argued for a comprehensive view of sustainability and a fuel cycle that recycles Plutonium from current LWRs into Molten Salt Reactors (MSR). The MSR concept is a very advanced, very sexy reactor design with a long history, and it even has an impressive community following behind it. The Plutonium is only needed to start such reactors, by the way; from then on, each can produce enough new fuel from Thorium to sustain itself, with even a little extra.

One major head-turner for me was the focus on electric vehicles (EVs). At first I was doubtful of the connection, but apparently Thorium is produced as a byproduct of mining for rare-earth minerals such as neodymium and dysprosium, which are precious commodities used to manufacture the strong permanent magnets in electric motors. It is interesting to note that mining the materials to make EVs can also produce an energy source to power them. The concept of a thorium energy bank, or "THE Bank", is argued for. If I understand correctly, surplus Uranium-233 would be sent back to the bank as "interest" on the use of the Thorium. It's a neat idea, but I question whether the value of Thorium would justify any such measure, since its abundance and lack of current use would leave its cost at bargain basement prices.

Thorium-Uranium Fast Wave Reactor Concept
Paper: Nuclear burning wave in fast reactor with mixed Th-U Fuel

Several researchers from Ukraine argue in this paper for a type of traveling wave reactor different from the Terrapower idea. If you refer to the above animation of the Terrapower reactor, imagine the direction coming out of the page (the vertical direction, or z-axis here); that is the direction the illustration below is showing. This idea calls for a traveling wave going up the reactor, instead of expanding out from the middle.

Aside from the difference in geometry, the rest of the idea is very similar. The active fuel region starts in the "ignition zone", where you begin with fissile material, and propagates out into the region containing newly bred fuel. This particular design uses an ignition zone at the bottom of the core, which would probably require a neutron reflector to be placed below it. A good argument against this arrangement is that no neutron reflector is perfect, and some neutron economy is lost. The second major problem I mentioned with Terrapower is still there: at any given time, only a small fraction of this core will be producing heat, yet significant pumping is still required to circulate coolant through the rest of the core.

Thorium in Gas Cooled Reactors
Paper: Reactor physics ideas for large scale utilization of thorium in gas cooled reactors

Researchers from an Indian research center and a Japanese university lay down some practical ideas for the use of Thorium in gas cooled reactors, noting specifically that Helium coolant and graphite moderators work well for this due to their good neutron economy. Japan has a long history with advanced research reactors, and their High Temperature Test Reactor (HTTR) design is referenced here. They propose some modifications to use Thorium in the core and denote this new design HTTR-M. The new design involves two types of Thorium assemblies: one with Thorium rods aligned next to Uranium rods, and one with only Thorium rods, which they call "seedless".

The results they publish are very representative of what we should be prepared to expect when mixing fissile and fertile fuel in new ways. In the first graph, they show that the HTTR-M, with Thorium, has a much smaller swing in reactivity. That is to say, the neutron balance changes less over the life of the core. This is very good for safety, because the flatter that line is, the less danger there is of accidentally having too much reactivity, which can complicate an accident (see the "recriticality" concerns from Fukushima Daiichi).
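To pin down what a "reactivity swing" means numerically, here is a hypothetical sketch; the k values are invented to mimic the shape of the curves described and are not from the paper:

```python
# "Swing" here is just the spread between the largest and smallest
# multiplication factor (k) over the core's life. Values are illustrative.
k_uranium_only = [1.15, 1.12, 1.08, 1.04, 1.00]   # steep swing over burnup
k_with_thorium = [1.06, 1.05, 1.04, 1.02, 1.00]   # flatter: Th-232 breeds U-233

def swing(k_history):
    return max(k_history) - min(k_history)

print(round(swing(k_uranium_only), 2))   # larger swing, more excess reactivity
print(round(swing(k_with_thorium), 2))   # smaller swing, safer core
```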

The next graph shows how the peaking factor changes over the life of the core. The peaking factor is the ratio of the highest rate of heat production to the average. It shows that managing the reactivity through the use of Boron as a neutron absorber can achieve better results, meaning a more uniform core. Using Boron, however, can be equated to "throwing away" neutrons, neutrons that could be used to create more fuel.
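The peaking factor definition above is simple enough to sketch directly; the region power values below are hypothetical, just to show what "uniform" buys you:

```python
# Peaking factor: hottest region's power divided by the core-average power.
def peaking_factor(region_powers):
    return max(region_powers) / (sum(region_powers) / len(region_powers))

flat_core   = [100, 105, 110, 105, 100]   # hypothetical near-uniform profile
peaked_core = [60, 100, 160, 100, 60]     # hypothetical strongly peaked profile

print(round(peaking_factor(flat_core), 2))    # 1.06, easy on cooling margins
print(round(peaking_factor(peaked_core), 2))  # 1.67, one hot spot limits the core
```

A lower peaking factor means the whole core can run closer to its thermal limits, which is why designers chase a flat power distribution.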

In Closing

There are inroads to the use of the abundant fertile isotopes of Thorium and Uranium-238 in just about every reactor design you can think of, and this series of journal articles articulates these specific cases. There will be an art to balancing the use and arrangement of fissile (the "seed") and fertile isotopes in reactors of the future. Nuclear fuel managers technically already do this with Uranium-235 and Uranium-238, but the imperative to stretch the world's Uranium-235 resource will certainly intensify in the future, and much more radical uses of fertile isotopes should be planned for.