All posts by Mikhail Chudakov

Nuclear Power After Fukushima: IAEA Projections

“It is still an exciting time for nuclear power,” International Atomic Energy Agency (IAEA) Director General Yukiya Amano said last January at a lecture in Singapore. Four years after the devastating accident at Japan’s Fukushima Daiichi nuclear plant, what justifies such a view?

Several objective reasons do.

For many countries, nuclear power remains an important option for improving energy security and reducing the impact of volatile fossil-fuel prices. As a stable, base-load source of electricity in an era of ever-increasing global energy demand, nuclear power complements other energy sources—including renewables.

And because nuclear power, together with hydropower and wind energy, has the lowest life cycle greenhouse gas emissions among all power generation sources, it is crucially linked to mitigating the effects of climate change.

A clear correlation links energy poverty and real poverty. Energy is the engine of development. In his vision for Sustainable Energy for All, UN Secretary General Ban Ki-moon says that “all energy sources and technologies have roles to play in achieving universal access in an economically, socially and environmentally sustainable fashion.” Simply put, to provide energy access to everyone, all forms of energy are needed.

Today, 1.3 billion people have no access to modern forms of energy. One billion people lack proper health care due to energy poverty. And 2.6 billion people, more than a third of the world population, still burn biomass for basic energy needs.

Projections

Couple these concerns about energy access with worries over supply security and carbon emissions, and we arrive at the current situation: four years after Fukushima, 30 countries still use nuclear power. About 11% of the world’s electricity comes from 440 operational nuclear reactors, and 68 more are under construction, with the trend growing.

Speaking of trends: The IAEA’s latest projections, from August 2014, show that the world’s nuclear power generating capacity will grow by between 8% and 88% by 2030 (IAEA, 2012). Fukushima may have slowed the growth of nuclear power, but it did not stop or reverse it. In short, we expect continued expansion in the global use of atomic energy over the next 20 years, especially in Asia, where two-thirds of the reactors currently under construction are being built.

Of the 30 countries that operate nuclear power plants, 13 are either constructing new units or are completing previously suspended construction projects. A further 12 are actively planning to build new units (IAEA, 2014a).

Newcomers

In addition to the 30 established users of nuclear power, about the same number of countries are interested in adding nuclear power to their energy mix – the so-called “newcomers.” One thing must be clear: it is the sovereign decision of every country whether to launch a nuclear power program. The IAEA does not try to influence that decision. But when a Member State decides to go that route, the IAEA is there to help (IAEA, 2014a).

The newcomers are at different stages of development: although the majority are currently at the “consideration” stage and have not yet made a national decision, the United Arab Emirates and Belarus are already constructing their first nuclear power plants.

Energy Planning

The future of nuclear power is linked to the future of energy. A country’s energy mix changes over time. Resources that become depleted, too expensive, or environmentally detrimental are replaced by new technologies and energy sources. Hence, energy planning is vital to meeting future capacity needs in ways that are economic, clean, and socially and environmentally responsible.

The IAEA’s energy planning models and tools are used by 130 Member States and by more than 20 regional and international organizations. They assist countries in making informed decisions on future plans, irrespective of their interest in nuclear power.

Fukushima Lessons

Any nuclear power program is a major undertaking. It requires careful planning, preparation and a major investment of time and human resources. Of course, safety, as the Fukushima accident reminded us, is vital to the future development of nuclear power. IAEA Member States responded quickly to the accident by unanimously adopting the IAEA Action Plan on Nuclear Safety (IAEA, 2011) in an effort to look critically at several technical issues in nuclear power production. From severe accident management to communication, from emergency preparedness and response to enhanced research and development, Member States are focusing on lessons learned from the accident to improve nuclear safety in a holistic way.

Innovations

In addition to post-Fukushima safety upgrades in existing reactors, technological advances are also under way to make nuclear power safer and more efficient. Nuclear fusion, fast reactors and closed fuel cycles can extend the use of our resources to thousands of years. Small and medium-sized reactors (SMRs) can address grid-size limitations and the challenge of large up-front capital requirements. There are about 45 innovative SMR concepts, with Argentina, China, India and Russia already building theirs (IAEA, 2014).

The Agency assists its Member States, both newcomers as well as experienced users, in establishing the appropriate legal and regulatory framework, and offers know-how on the construction, commissioning, start-up and safe operation of nuclear reactors. The IAEA also establishes nuclear safety standards and security guidance. Its expert peer review missions help Member States in a wide range of areas, including uranium mining, plant safety, secure nuclear facilities, decommissioning and waste management.

The IAEA, in conclusion, helps nations gain or extend access to nuclear power—one of the great applications of atomic energy. By doing so, the Agency fulfills the mandate it adopted six decades ago: to “seek to accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world.”[1]

[1] Article II of the IAEA Statute.


Mikhail Chudakov is IAEA Deputy Director General and Head of the Department of Nuclear Energy.

Skepticism About a Large Nuclear Expansion in the US

The US may not be good enough at large infrastructure projects to do it well

We are currently in the midst of renewed interest in a “nuclear renaissance,” including newfound support among some environmentalists concerned enough about climate change to set aside fears about nuclear waste and risk and argue for a role for nuclear power. Four modern reactors are under construction in the southern US, which, if completed, would be a significant step forward after more than 20 years without new reactors coming on line. And government-led expansion of Generation III and III+ reactors has been rapid and relatively inexpensive in South Korea and China.

Despite this, I remain skeptical about the US significantly expanding its nuclear generating capacity as a way to mitigate climate change in the next several decades. Specifically, I think that such an expansion would require a large push of funding and leadership from the federal government that would probably have to go beyond a simple price on carbon, and I think that would be a poor investment based on the US’s recent track record with nuclear power plants and other large, complex infrastructure projects.

There are many other possible reasons to think the US shouldn’t make such a push, and some of them partially influence my assessment. Intergenerational ethical problems top many people’s lists, as politically embattled nuclear waste that needs to be contained for thousands of years is not the kindest inheritance. Fears of catastrophic risk including terrorism and weapons proliferation are also prominent concerns, and are near the top of my list. There are also other worries that don’t sway me as much but are significant parts of the public debate, including nuclear exceptionalism, the idea that nuclear contamination is a unique kind of harm to humans and the environment that cannot be traded off against other costs and risks.

These kinds of concerns are enough to make even the highly climate-motivated reluctant about nuclear power, and I think the final deciding factor is the significant uncertainty surrounding how quickly the US could really build new plants, and at what cost, especially when nuclear cost curves appear to be increasing. In fairness, much of this uncertainty stems from the interminable construction delays of the 1980s, which were often the result of escalating regulations during construction, or of public opposition in certain parts of the country. The 2005 Energy Policy Act streamlined many of the most problematic aspects of plant licensing, and the four new reactors under construction are in Georgia and South Carolina, where the public is largely supportive of nuclear energy, hopefully paving the way for easier construction.

But even these four reactors are already experiencing significant delays and cost overruns. The two AP1000 units at Plant Vogtle began construction in 2013 and have already been delayed until at least 2019. With capital costs nearing $15 billion for 2.22 gigawatts (GW) of capacity, a basic Levelized Cost of Energy (LCOE) calculation suggests a break-even price of around $0.14/kWh.[1] The two AP1000 units at the VC Summer Generating Station began construction shortly before Plant Vogtle, and have also been delayed from their original 2017–2018 completion dates (2017 for the first unit, 2018 for the second) to 2019–2020. Costs have also escalated, from $9.8 billion to at least $11.2 billion, yielding an LCOE estimate of around $0.10/kWh.[2] This might also fit into a larger trend of US struggles with large infrastructure projects, including notably more expensive subway construction than in other countries, and significantly more difficulty planning high-speed rail.[3]
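The break-even arithmetic can be reconstructed roughly as follows. The sketch below is not the calculation from the Literature Cited section; it uses illustrative assumptions – a 10% discount rate, a 30-year economic life, a 90% capacity factor, and about $0.04/kWh of operating and fuel costs – that happen to land near the quoted figures.

```python
# Rough LCOE sketch for the figures quoted above.
# All parameters besides capital cost and capacity are illustrative
# assumptions, not the author's actual inputs.

def lcoe(capex_usd, capacity_gw, rate=0.10, years=30,
         capacity_factor=0.90, om_fuel_per_kwh=0.04):
    """Levelized cost of energy in $/kWh via the capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_kwh = capacity_gw * 1e6 * 8760 * capacity_factor  # GW -> kW, x hours
    return crf * capex_usd / annual_kwh + om_fuel_per_kwh

print(f"Vogtle:    ${lcoe(15e9, 2.22):.2f}/kWh")   # ~0.13, near the cited ~$0.14
print(f"VC Summer: ${lcoe(11.2e9, 2.2):.2f}/kWh")  # ~0.11, near the cited ~$0.10
```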

Of course, much time has passed since our last construction of plants, so delays and high costs aren’t totally surprising. Maybe if we committed to building many more AP1000s in a row, costs and construction times would eventually come down and yield relatively dispatchable and inexpensive low-carbon electricity. A large entity like the US government could afford to make such an investment, but it doesn’t seem like a good bet to me given the alternatives. First, the size and complexity (both engineering and regulatory) of modern nuclear plants, along with the long time scales for licensing and construction, make learning-by-doing more difficult than for other low-carbon generators. The extreme contrast is solar photovoltaics (PV): many 100 MW solar PV arrays are being installed in several months or less, and PV cells are being manufactured at a fast pace, creating greater economies of scale and allowing for more incremental advances than nuclear plants that take years to license and at least four years to build. Furthermore, projects as large as nuclear reactors are almost certainly more likely to experience significant delays, and this is especially true of plants where regulatory scrutiny of any changes during construction is intense and time-consuming. Lastly, nuclear reactors are probably the only low-carbon generators that could fall completely out of public favor as the result of one discrete event – an act of terrorism, the use of a weapon, or a significant accident could all lead to irreparable reversals of trust by the public and thus the government. While the chance of this happening in any one year is small, if we imagine making a large push for learning-by-doing that could take several decades, it starts to be a considerable risk.

This isn’t to say that there will be no new nuclear reactors installed in the US in the future. It’s quite likely that there will be at least several more, and it’s possible that costs could come down significantly after this first new wave of reactors is built and spawn a large, spontaneous build-out. There is a strong possibility that China will expand its nuclear fleet, likely benefiting from a strong centralized government and a track record of timely construction. But it has been a long time since the US has built reactors economically, and relative to other countries we might have a harder time executing large, high-profile infrastructure projects, especially if they draw significant public interest and possible litigation. This leads me to believe that significant government support would be needed to make nuclear expansion a reality, and that it would not be a wise choice even viewed strictly as a carbon-reduction strategy. Momentum matters when tackling a contentious issue like climate change, and the US might be better off putting its effort behind technologies with cost curves that are more obviously declining, and that can be built in a series of smaller victories rather than large, one-gigawatt steps that could be contentious or frequently delayed.

[1] See LCOE calculation in Literature Cited section.

[2] See LCOE calculation in Literature Cited section.

[3] See Lepska (2011) for a comparison of per-km costs of subway construction in different cities, and Dayen (2015) for a brief review of the sources of delays and opposition to high-speed rail in California.

Daniel Thorpe is a PhD candidate at Harvard School of Engineering & Applied Sciences. 


 

Photo: Chinese President Xi Jinping (right) and U.S. President Barack Obama attend a joint press conference in Beijing on Nov. 12, 2014, following a meeting at which they agreed to reduce the risk of military conflict and to combat climate change. (Kyodo)

Clean Energy Futures and the Role of Nuclear Power

Thanks to a number of factors – natural disasters, the steady flow of increasingly clear and detailed data, and significant new political accords such as the US-China climate agreement of November 2014 – climate change is now squarely in the public and political debate (The White House, 2014). Many of us, of course, have been arguing that this should have been the case long ago. In my case, I am pleased to have worked as a contributing and then a lead author for the Intergovernmental Panel on Climate Change since the late 1990s (IPCC, 2000).

With the scientific consensus now clear that global emissions must be reduced dramatically – by eighty percent or more by 2050 – attention is turning to two questions: 1) What is the permissible budget of fossil-fuel use? And 2) what are our viable scientific, technological, economic, and political options to power the economy cleanly before mid-century?

On the first question a series of increasingly clear assessments have appeared that document the oversupply we have of carbon-based fuels. In the latest, high-profile paper, researchers Christophe McGlade and Paul Ekins (2015) make clear that Hubbert’s peak – the rise and then decline in a non-renewable resource such as coal, oil or gas – is largely irrelevant to addressing the climate issue. Fossil fuel scarcity will not initiate the necessary transition.

The environmental bottom line is that to meet our climate targets, cumulative carbon dioxide emissions between 2011 and 2050 must be kept to 870–1,240 gigatonnes (10⁹ tonnes) if we are to limit global warming to 2 °C above the average global temperature of pre-industrial times. In contrast, the carbon contained in our global supply of fossil fuels is estimated to be equivalent to about 11,000 Gt of CO2, which means that the implementation of ambitious climate policies would leave large proportions of reserves unexploited.
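The arithmetic behind “large proportions” is stark. A back-of-the-envelope computation using the figures just quoted:

```python
# Back-of-the-envelope: what fraction of fossil carbon is unburnable?
# Figures are those from McGlade and Ekins (2015) as quoted in the text.
budget_gt = 1240      # upper end of the 2011-2050 CO2 budget (Gt)
reserves_gt = 11000   # CO2 embodied in the known fossil fuel supply (Gt)

unburnable = 1 - budget_gt / reserves_gt
print(f"{unburnable:.0%} of fossil carbon must stay in the ground")  # ~89%
```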

There have been several recent calls from people and organizations concerned about global warming to use nuclear electricity generation as part of the solution. This includes The New York Times, the Center for Climate and Energy Solutions (formerly the Pew Center on Global Climate Change), and a number of leading scientists, engineers, and politicians. These calls speak to the potential of nuclear energy technologies to deliver large amounts of low-cost energy. New advanced reactors, small-modular reactors, and fusion are all candidates for providing this energy, with knowledgeable and ardent supporters backing each of these technologies and pathways.

At the same time, there are very serious concerns with both the nuclear power industry as it has developed thus far and with how it might evolve in the future. Alan Robock of Rutgers University summarizes these concerns in an exceptionally clear editorial (Robock, 2014), questioning the nuclear power industry’s record on: 1) proliferation resistance; 2) the potential for catastrophic accidents; 3) vulnerability to terrorist attacks; 4) unsafe operations; 5) economic viability; 6) waste disposal; 7) the impacts of uranium mining; and 8) life-cycle greenhouse impacts relative to “renewables.” Battles back and forth between proponents and detractors are sure to continue, but simply looking at #5 on this list alone – the direct costs and opportunity costs of investing in present-day nuclear power – demonstrates the scale of the challenge.

To address this, consider that of the 437 nuclear plants in operation worldwide today, most will need to be replaced in the coming three decades for nuclear power to even retain its current generation capacity, let alone to grow as a major technology path to address climate change. To examine this future, my students Gang He and Anne-Perrine Arvin (2015) and I have built a model of the entire Chinese energy economy, where nuclear power is expected to play a major role.

Today, China’s power sector accounts for 50% of the country’s total greenhouse gas emissions and 12.5% of total global emissions. The transition from the current fossil fuel-dominated electricity supply and delivery system to a sustainable, resource-efficient system will shape how the country, and to a large extent, the world, addresses local pollution and global climate change. While coal is the dominant energy source today, ongoing rapid technological change coupled with strategic national investments in transmission capacity and new nuclear, solar and wind generation demonstrate that China has the capacity to completely alter the trajectory.

The transition to a low-carbon or “circular” economy is, in fact, the official goal of the Chinese government. In the U.S.-China Joint Announcement on Climate Change, China committed to peaking its carbon emissions by 2030 and to getting 20% of its primary energy from non-fossil sources by the same year. The challenge is making good on these objectives. Installed wind capacity, for example, has sustained a remarkable 80% annual growth rate since 2005, putting China far in the lead globally with over 91 gigawatts (GW) of installed capacity in 2013 (4% of national electricity capacity), compared with the next two largest deployments: 61 GW in the United States (5% of total electricity) and 34 GW in Germany (15% of total capacity).

China’s installed solar power capacity has also been growing at an unprecedented pace. Grid-connected solar photovoltaic (PV) capacity had reached 19.42 GW by the end of 2013 (1.6% of total capacity), a more than 20-fold increase from 0.9 GW in 2010. These figures show that rapid technological deployment is possible.

Central to this discussion is the role of nuclear power, because half of all the new nuclear power plants planned by 2030 worldwide are forecast to be built in China (roughly 30 of 60 total nuclear plants anticipated to be constructed over the next 15 years).

The question remains whether this large-scale build-out of nuclear power will happen a) in China; and b) as a significant component of the energy mix in other nations, both industrialized and industrializing.

In our modeling work on both the Chinese and United States energy economies (see the program website: http://rael.berkeley.edu/switch), we find that there is a diverse range of pathways that can achieve the needed 80% emission reduction by mid-century. Some are more solar-dominated (Mileva et al., 2013), some more wind-driven, some heavily reliant on biological carbon capture (Sanchez et al., 2015), and so forth. A carbon price of $30–40 per ton of carbon dioxide is critical to drive each of these cases, and nuclear is no exception.

Returning to the list of challenges that Alan Robock poses, however, the prospects for nuclear power as a major source of energy are troublesome. This path is contingent on solving a long and serious list of issues that, most energy planners would conclude, have not yet been successfully addressed.

Dr. Daniel M. Kammen is a professor in the Energy and Resources Group, the Goldman School of Public Policy, and the Department of Nuclear Engineering, and is the Founding Director of the Renewable and Appropriate Energy Laboratory (http://rael.berkeley.edu) at the University of California, Berkeley.


How Does Nuclear Energy Work?: A brief scientific introduction

The basic principle at the core of most nuclear reactors is simple: pack together enough radioactive material of the right type, and you get a chain reaction in which an atom (say, of uranium) “splits” into two smaller atoms (i.e., undergoes fission), releasing some heat and also some neutrons (particles found in the nucleus of atoms). The neutrons can strike nearby uranium atoms and cause them to split as well, leading to a chain reaction that continues to release heat along with the neutrons that sustain it (Figure 1, below).[1]

[Figure 1: Diagram of a fission chain reaction.]

This splitting happens naturally at a low rate in uranium, so if you pack the material tightly enough under the right conditions, the process can start on its own. In fact, it has happened spontaneously in nature on rare occasions: 1.7 billion years ago in Oklo, Gabon, the right convergence of natural uranium and water led to an underground “reactor” that lasted for over 1,000 years and produced about 100 kilowatts (kW) of heat on average, roughly equal to the output of 20 standard residential rooftop solar arrays in midday sun. Although 100 kW is small, the energy that can be released from such a process per unit of fuel is enormous – 1 metric ton of typical enriched uranium fuel can release over 1 billion kWh of thermal energy over its useful life in a reactor, as much as would be derived from 160,000 metric tons of coal.
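That coal equivalence can be sanity-checked with unit conversion alone. The sketch below assumes a typical coal energy density of roughly 24 MJ/kg, a round number not given in the text:

```python
# Sanity check: how much coal matches 1 tonne of enriched uranium fuel?
# The coal energy density (~24 MJ/kg) is an assumed typical value.
uranium_heat_kwh = 1e9                       # thermal energy from 1 t of fuel (from text)
uranium_heat_j = uranium_heat_kwh * 3.6e6    # 1 kWh = 3.6 MJ
coal_j_per_tonne = 24e9                      # 24 MJ/kg * 1000 kg

coal_tonnes = uranium_heat_j / coal_j_per_tonne
print(f"{coal_tonnes:,.0f} tonnes of coal")  # ~150,000, consistent with ~160,000
```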

Building a device that releases this huge store of energy is quite straightforward. Making such a device both safe and economical is the technical challenge engineers and scientists have labored over for the past 60 years. Additionally, engineers must contend with the problem of nuclear waste disposal and how to prevent undesired parties from using the same technology needed for a benign energy system to instead make a weapon. Each of these topics is complex and deserving of multiple textbooks, but here we briefly overview the technical aspects of plant design, fuel cycles, and waste as a primer for reading some of the articles in this review.

Basic Plant Design

At a high level, all a nuclear power plant does is carry out the chain reaction described above in a controlled way, and then use the resultant heat to produce electricity. Typically, electricity is generated by using the heat to produce steam that drives a turbine and generator, in much the same way as in a coal plant or a concentrating solar power plant.

[Figure 2: Schematic of a typical Pressurized Water Reactor and its three water loops.]

Figure 2 (above)[2] shows a typical modern “Pressurized Water Reactor” (PWR), with three “loops” of water. The first loop passes through the reactor and picks up heat from the chain reaction, but is so pressurized that it does not actually boil. The pipes carrying this hot water then pass through a steam generator, where water from a separate loop vaporizes to steam. Note that the water coming directly from the reactor core, which contains radioactive elements, ideally never comes into physical contact with the water being turned to steam; it just passes its heat along and heads back to the reactor core. The hot steam then turns a turbine to generate electricity, and later comes into contact with pipes from a third loop carrying cold water. The cold water cools the steam and condenses it back into liquid water, which can then flow back to the steam generator and be vaporized again. The cooling loop, several steps removed from the actual nuclear reactions, either passes through an iconic cooling tower (like the one displayed on the cover of this publication) or an external water source like the ocean or a river, releasing the heat into the air or water, but not releasing any physical material from the nuclear reaction.

Of course, the details are more complex, especially inside the reactor itself. Not all uranium is equally useful for sustaining a chain reaction – the most abundant isotope, U238, is fairly difficult to use, while the much less common U235 is more desirable. Natural uranium found today contains around 99.3% U238 and just 0.7% U235, which under most conditions is not enough to carry out a chain reaction, as neutrons released by the fissioning (splitting) of one U235 atom are not likely to collide with another U235 atom in time. To run most modern nuclear reactors, the uranium either needs to be “enriched,” by increasing the fraction of U235, or needs to be immersed in a strong “moderator,” a substance that makes neutrons bump into other uranium atoms at a higher rate, thus making a chain reaction more likely. Water, the typical working fluid in reactors as described above, is not a very strong moderator, meaning that the uranium has to be slightly enriched in standard plant designs, usually to 3% U235. However, other configurations are possible – Canada did not want to enrich nuclear material, so it instead built the CANDU fleet of plants using deuterium oxide (“heavy water”), a much stronger moderator than H2O, allowing even natural uranium to sustain a chain reaction. This eliminated the need for enrichment facilities to increase the fraction of U235 in fuel, but required facilities to produce heavy water instead.

 

Controlling a Chain Reaction, and Its After-Effects

One obvious question: if a chain reaction is happening in the reactor, releasing ever more heat and neutrons, how do we keep the reaction from “running away” and becoming so hot that it melts the reactor? Modern reactors use three main strategies: 1) they are designed with a negative feedback loop, in which the reactor becoming hotter slows down the reaction, for reasons we will not describe here; 2) they are designed with a “negative void coefficient,” meaning that the reaction slows or stops if the pressurized water coolant is lost – thus, if the reactor starts to overheat and vaporizes the water, the reaction is slowed or halted; and 3) they use “control rods,” physical rods made of a neutron-absorbing material that can be inserted among the fuel rods, absorbing enough neutrons to halt the process. These measures have been very reliable – there have been no major accidents at plants with all three in place.
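To see why negative temperature feedback is stabilizing, consider a toy lumped-parameter simulation. This illustrates the feedback principle only – the parameters are invented, and the model omits delayed neutrons and much else, so it is not a model of any real reactor:

```python
# Toy illustration of negative temperature feedback (made-up parameters,
# prompt-neutron point kinetics only -- not a real reactor model).
rho0 = 5e-4      # initial reactivity insertion
alpha = 1e-5     # reactivity lost per degree of temperature rise
gen_time = 1e-3  # effective neutron generation time (s), artificial
k_cool = 100.0   # heat removal coefficient (W/K)
heat_cap = 1e3   # lumped heat capacity (J/K)

P, T, T0, dt = 1000.0, 300.0, 300.0, 1e-3
for _ in range(int(300 / dt)):         # simulate 300 s
    rho = rho0 - alpha * (T - T0)      # feedback: hotter -> less reactive
    P += (rho / gen_time) * P * dt     # power grows only while rho > 0
    T += (P - k_cool * (T - T0)) / heat_cap * dt

print(f"P -> {P:.0f} W, dT -> {T - T0:.0f} K")
# Power overshoots, then settles near P = 5000 W and dT = 50 K, the point
# where the temperature rise exactly cancels the inserted reactivity.
```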

But there certainly have been accidents at nuclear power plants. They usually involve “decay heat,” which is released even after the chain reaction has ceased. This heat comes from the continued breakdown of unstable atoms produced in the reaction, and it can be of considerable magnitude. A full day after a reaction has been halted, a typical reactor will still be producing 10 megawatts (MW) of heat. This is enough to heat all of the water in the “first loop” by over 750 °C per day, and would quickly start melting through the reactor vessel and/or cause explosions if the rest of the loops were not running to draw the heat away. This was the problem at Fukushima – the reaction was halted, but without electricity the cooling loops could not keep running, and the reactor eventually overheated. Managing decay heat is thus one of the central problems addressed in new reactor designs, which brings us to the next section, a brief review of new designs being considered.
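The 750 °C figure follows from simple heat arithmetic. The sketch below assumes a primary-loop water inventory of roughly 270 tonnes – a typical PWR value, but an assumption not stated in the text:

```python
# How fast does 10 MW of decay heat warm an uncooled primary loop?
# The ~270 t water inventory is an assumed typical PWR value.
decay_heat_w = 10e6
seconds_per_day = 86400
water_mass_kg = 270e3
c_water = 4186  # specific heat of water, J/(kg*K)

energy_per_day = decay_heat_w * seconds_per_day        # ~8.6e11 J
dT_per_day = energy_per_day / (water_mass_kg * c_water)
print(f"{dT_per_day:.0f} K per day")  # ~760, i.e. "over 750 C per day"
```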

 

Improving Plant Design

So far we have reviewed the predominant type of reactor in the world today, the Pressurized Water Reactor using enriched uranium. There are other types, such as the CANDU reactors with heavy water mentioned before, and “boiling water reactors” that allow the first loop of water to boil rather than keeping it liquid with high pressure. But most of the basic principles are the same. To use nuclear industry parlance, all reactors of these types are usually categorized as Generation III, or III+ if they have slightly improved safety and/or performance.

Do we need to improve on this plant design? In some countries, namely China and South Korea, new Generation III and III+ plants are being built fairly economically (roughly cost-competitive with other options) and are deemed safe enough. In the West, however, most countries either deem them unsafe or struggle to build them economically, for a variety of reasons.

Especially given growing interest in low-carbon electricity, much attention is being given to new reactor and plant designs. These are too varied and detailed to treat in depth, but they usually pursue some combination of three goals: 1) improved safety, 2) reduced cost, and 3) reduced waste.

“Passively safe” is a term associated with next-generation plant designs, ideally meaning a design in which decay heat is handled passively, without relying on active engineering systems that could fail. A simple example would be to have the reactor rest in a huge pool of coolant at all times, so large that even in the event of an indefinite power outage the reservoir can absorb the decay heat. Costs can be reduced by reducing the complexity of plant design, or by operating at higher temperatures to allow better thermal efficiency in electricity generation. Wastes can be reduced in several ways, such as by modifying the nuclear chain reaction to produce less stable radioactive byproducts, resulting in less total waste with shorter lifetimes.

Some proposed designs attempt to combine multiple improvements. For example, small modular reactors (<300 MW) could be significantly safer due to their small size and easier thermal management, and could reduce costs by being easier to assemble in factories, with less time for costly on-site construction. Of course, only time and experience will tell whether their costs would actually be lower, or whether smaller economies of scale or other factors would make them more expensive. Most proposed designs trade off among safety, cost, and waste: “fast neutron reactors,” for example, can significantly cut waste generation but are usually more costly, while supercritical water reactors could reduce costs but may not offer much additional inherent safety. But all of these designs are very far from commercial licensing, probably a decade or longer away, and significant financial investment and patience will be required to develop them further and determine with more certainty whether any offer a more appealing set of traits than current Generation III reactors.

Fuel Cycle

In the final section of this brief overview, we examine the basics of the nuclear fuel cycle as it exists in most countries with PWRs. Natural uranium is mined and sent to a fuel enrichment and fabrication facility. There it is separated into two streams – one enriched in U235, usually to around 3%, and another depleted in U235, which is usually discarded. Unfortunately, the same equipment used to enrich uranium to this level for nuclear power can also be used to enrich it further, closer to 90% U235, to make weapons-grade material, leading to ambiguities over whether some countries are enriching uranium for civilian or military purposes.
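The split into enriched product and depleted tails obeys a simple mass balance on U235. Here is a sketch assuming a tails assay of 0.25%, a typical operating choice not given in the text:

```python
# U235 mass balance for enrichment: feed = product + tails.
# The tails assay (0.25%) is an assumed typical operating value.
x_feed, x_product, x_tails = 0.00711, 0.03, 0.0025

def feed_per_kg_product(xf=x_feed, xp=x_product, xt=x_tails):
    """kg of natural uranium feed needed per kg of enriched product."""
    return (xp - xt) / (xf - xt)

f = feed_per_kg_product()
print(f"{f:.1f} kg feed, {f - 1:.1f} kg depleted tails per kg of 3% fuel")
# ~6 kg of natural uranium in, ~5 kg of depleted uranium discarded.
```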

The enriched fuel can then be used in PWRs, where it serves as fuel until the level of fissionable isotopes becomes very low again. Notice that the spent fuel leaving the plant now contains quite a variety of radioactive products, formed through the various reactions happening inside the reactor. The diversity of these wastes adds to the challenge of waste management, as some have half-lives of only several years while others have half-lives of many thousands of years.
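The consequence of mixed half-lives is easy to quantify with the decay law N(t) = N0 · (1/2)^(t/t½). As an illustration (the two isotopes below are common examples, not named in the text):

```python
# Fraction of an isotope remaining after t years, given its half-life.
def remaining(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

# After 300 years: a ~30-year isotope (e.g. Cs-137) is essentially gone,
# while a ~24,000-year isotope (e.g. Pu-239) has barely decayed.
print(f"Cs-137: {remaining(300, 30.1):.2%} left")   # ~0.10%
print(f"Pu-239: {remaining(300, 24100):.2%} left")  # ~99.1%
```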

Also notice that the spent fuel contains a significant amount of plutonium. This plutonium could also be used as fissionable material in a reactor, so some countries choose to “reprocess” their waste by extracting the plutonium and mixing it with depleted uranium to make more reactor fuel. This process tends to reduce the volume of waste and could be advantageous if uranium were scarce or expensive, but for now uranium seems relatively abundant and inexpensive, and reprocessing itself has proven expensive. Pure, fissionable plutonium created through reprocessing also raises concerns about safety, weapons proliferation, and terrorism. Despite these concerns, several major users of nuclear energy reprocess their fuel, with the US being a notable exception, mostly due to its policies of the 1970s that attempted to “lead by example” in reducing weapons proliferation.

As with plant designs, there are ways to improve on the current fuel cycle. One high-level improvement would be to form a “closed” rather than “open” fuel cycle by utilizing different kinds of reactors that generate as much fissionable material as they consume. Another is to use “fast reactors,” described earlier, to reduce the amount and lifetime of wastes. There are also possible geopolitical improvements, for example a global fuel cycle in which a few agreed-upon countries supply fuel to and accept waste from other countries. This would allow some countries to have nuclear power plants without ever enriching fuel or handling their waste, and would give countries like the US an easier waste disposal solution. Like the new reactor designs, though, these changes would take a very long time, easily beyond a decade, so if countries or the world decide they are desirable, they will require patience.

[1] Source: Intel Education Resources. http://inteleducationresources.intel.co.uk/examcentre.aspx?id=278

[2] Source: US Nuclear Regulatory Commission. http://www.nrc.gov/admin/img/art-students-reactors-1-lg.gif

 


LEED at University Residential Sites: Impact Analysis

Introduction and Overview of LEED

In the 21st century, sustainable development – maintaining the ability to provide for current needs without compromising the ability to meet future needs – is a primary concern (United Nations). To achieve sustainable development, the performance of the built environment must be dramatically improved through more effective energy use, without compromising indoor air quality. One prominent effort to promote a high-performance built environment is the Leadership in Energy and Environmental Design (LEED) green building accreditation system developed by the United States Green Building Council (USGBC). LEED is a comprehensive system of standards that seeks to promote “buildings and communities [that] will regenerate and sustain the health and vitality of all life within a generation” by defining the characteristics of a sustainable built site (USGBC). Under LEED, a recognized sustainable built site is awarded one of four levels of accreditation: Certified, Silver, Gold, or Platinum.

This study evaluates the effectiveness of the LEED standard as an indicator of high-performance residential sites at universities. One LEED Silver-accredited site and 14 non-accredited sites are evaluated on three metrics: utility costs per permanent occupant, land affected per permanent occupant, and greenhouse gas emissions per permanent occupant. As the findings will illustrate, the lower utility costs and CO2 emissions of the LEED Silver-accredited site are positive indicators of the effectiveness of the LEED rating system, while the land area impacted per permanent occupant by the LEED-accredited site is not significantly better than that of the non-accredited sites. This shortcoming also provides lessons for sustainable building using the LEED system of standards.

Criticism: How effective are LEED Standards?

Since its introduction, some experts have questioned the effectiveness of the LEED system. Important criticism comes from Harvey Bryan, a professor at Arizona State University who was active in the development of ASHRAE 90.1 Appendix G, a widely accepted energy efficiency standard also used to assess projects for LEED accreditation. Bryan notes that LEED assesses a built site’s performance by modeling the energy use of two structurally identical versions of the site in accordance with the ASHRAE 90.1 Appendix G standard (2009, 175). The first version provides a baseline for site performance – the site as if it were constructed as a “typical” site of the same type. The second version models the site exactly as it will be built. The difference between the two is used to award credits toward LEED recognition. According to Bryan, project teams sometimes choose to model an abnormally low-performance site for the baseline, dramatically inflating the modeled performance of the project (LEED, 70). In one case, the Biodesign Institute, a LEED Gold-accredited project at Arizona State University, modeled 60% energy savings but realized only 21% savings (LEED, 70).

Evaluation: Comparative Performance of LEED-Accredited Versus Non-LEED-Accredited Residential Sites

To assess the effectiveness of the LEED system, data from the 15 residential sites listed below were compiled and evaluated.

Site Name                              Number of Occupants   Usable Square Feet
Canaday Hall                           246                   56,139
Grays Hall                             98                    25,184
Greenough Hall                         83                    22,297
Hollis Hall                            60                    18,814
Holworthy Hall                         84                    18,827
Hurlbut Hall                           58                    17,335
Lionel Hall                            34                    8,392
Matthews Hall                          152                   43,583
Mower Hall                             34                    8,392
Pennypacker Hall                       103                   25,695
Stoughton Hall                         57                    19,265
Straus Hall                            95                    22,097
Thayer Hall (LEED Silver accredited)   157                   44,630
Weld Hall                              153                   36,213
Wigglesworth Hall                      202                   49,274

 

The 15 sites are assessed on three different metrics:[2]

  1. Annual utility cost per permanent occupant
  2. Annual greenhouse gas emissions (GHG) per permanent occupant
  3. Annual land area affected per permanent occupant

Thayer Hall received LEED Silver accreditation in 2011, so all measurements are based on averages from FY2012, FY2013, and FY2014.[3]

Methodology

To evaluate the performance of each site, utility usage and cost data were acquired using Interval Data Systems’ Energy Witness reporting tool. Unless noted otherwise, all years referenced in this study are Harvard fiscal years. Thayer Hall received its LEED Silver accreditation in July of calendar year 2011, so the data cover the fiscal years after accreditation: FY2012, FY2013, and FY2014. The results are an average of FY2012, FY2013, and FY2014 usage and cost data; averaging three years smooths out abnormalities caused by random short-term fluctuations in usage. The most resource-intensive dorm amenities, laundry and irrigation, are not distributed evenly among the 15 dorms. To allow a valid comparison, this difference was mitigated by assessing the electricity and water consumption of sites with and without laundry facilities, then adjusting usage based on population data and facility distribution. Water and sewage consumption values in this study exclude site irrigation. The GHG emission metric accounts for both utility consumption and the mix of fuels used in production. Metrics are calculated on a per-occupant basis to allow direct comparison between sites that would otherwise be impossible to compare, because the 15 sites are of different sizes and house different numbers of occupants.
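The normalization itself is straightforward. The sketch below illustrates the three-year-average, per-occupant calculation with hypothetical numbers; the actual Energy Witness data layout and the laundry/irrigation adjustments described above are omitted:

```python
# Per-occupant, three-year-average metric as used in this study.
# The data values here are hypothetical placeholders, not Energy Witness output.
def per_occupant_average(annual_totals, occupants):
    """Average a metric over fiscal years, then normalize by occupancy."""
    return sum(annual_totals) / len(annual_totals) / occupants

# e.g. hypothetical total utility costs for one dorm in FY2012-FY2014:
costs = [92_000, 95_500, 89_800]  # $/year
print(f"${per_occupant_average(costs, 157):,.2f} per occupant")
```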

Metric: Annual Utility Cost

Annual utility costs provide insight into the economic validity of LEED. The USGBC claims that sites with LEED accreditation have lower utility costs than equivalent non-LEED sites. If the LEED accreditation system is valid, then sites granted any level of accreditation should have substantially lower utility costs per occupant than sites without accreditation.

Figure 1: Annual utility cost per permanent occupant (average of FY2012–FY2014).

From FY2012 to FY2014, LEED Silver-accredited Thayer Hall was the third most efficient site, surpassed only by Straus Hall and Weld Hall. Thayer Hall operates at a cost of $95.66 per occupant less than the average of the 14 other sites, a 16.3% advantage. With 157 occupants, that amounts to over $15,000 saved per year by constructing to LEED’s standards. This demonstrates that LEED can make buildings more cost effective, as claimed.

Metric: Annual Greenhouse Gas Emissions

Site performance should also be evaluated on GHG emissions, because they provide more complete information about a site’s fuel mix than utility cost alone: utility cost could be low because of reliance on cheap, CO2-intensive fuels. The USGBC claims that LEED-accredited buildings will emit less CO2 than equivalent sites without accreditation.

GHG emissions for all 15 sites in this study have three sources: electricity consumption from the New England electric grid (eGRID sub region NPCC New England), natural gas consumption, and steam consumption. All fuel emission factors are from the Energy Information Administration. GHG emissions from steam production are adjusted to account for secondary electricity production through Harvard’s district heating system. The results are as follows:

Figure 2: Annual greenhouse gas emissions per permanent occupant (average of FY2012–FY2014).

Thayer Hall emits 1,617 lbs CO2 per occupant per year less than the average of the 14 other sites, an 18.1% difference. This indicates that LEED-accredited residential sites emit significantly less greenhouse gas per occupant than sites without accreditation, and supports the conclusion that LEED is a valuable tool for certifying low-CO2 residential sites in university environments. This matters because GHG emissions are understood to be disruptive to global climate systems, and many universities face significant pressure from within and without to reduce their GHG emissions dramatically. This metric indicates that constructing built sites to comply with the LEED rating system is an effective strategy for universities to respond to that pressure.

 

Metric: Land Impact

Built sites also affect land use patterns through energy consumption, producing a “land footprint.” Some cheap and low-CO2 energy sources, such as biomass or natural gas-derived electricity, have large land impacts (McDonald), contributing to food shortages, biodiversity loss, and habitat destruction. As a result, a sustainable built site will optimize its energy mix to impact as small an area as possible while minimizing GHG emissions. In assessing land use, water and sewer usage are assumed to have a negligible impact.

Figure 3: Annual land area affected per permanent occupant (average of FY2012–FY2014).

Thayer Hall’s land use per occupant is the 7th lowest of the 15 sites assessed and only 1.6% below the 14-site average. By this metric, then, the LEED Silver-accredited Thayer Hall does not perform as favorably as it did on utility cost or greenhouse gas emissions. Thayer Hall’s poor performance is a direct result of the fact that electric power is the most land-intensive of the three energy utilities measured, accounting for an average of 65% of land impact across all 15 sites but only 29.8% of energy consumption. Thayer Hall draws 40.1% of its total energy consumption in the form of electric power, more than any other site. Its reliance on electric power accounts for its high land area impacted, a direct consequence of the fact that Massachusetts derives 46% of its electric power from land-intensive natural gas (2014, 13).

Summary of Metrics and Conclusion

Unfortunately, limitations in available data restricted the scope of this study to one LEED-accredited site. Additionally, all of the sites studied are located in Cambridge, MA, which has adopted local building energy codes that may be more or less stringent than those used in other localities. Further research is therefore necessary before broad conclusions can be drawn. Still, the utility cost, greenhouse gas emission, and land impact results indicate that the LEED system provides advantageous standards for constructing sites with reduced utility costs and reduced GHG emissions. The land impact results, however, suggest that LEED may not enhance site performance on significant metrics that fall outside its immediate scope. LEED is thus a useful tool for developing a sustainable built environment, but institutions that use it should do so with a clear vision of built-site performance, as LEED does not cover every relevant metric of a high-performance built environment. The LEED system is still valuable, but the construction of sustainable environments requires careful diligence in addition to its use.

Clifford Goertemiller is an undergraduate at Harvard.

[2] A note on transparency: All data used in this study are available through Harvard’s Energy Witness tool. http://www.energyandfacilities.harvard.edu/tools-resources

[3] The Harvard fiscal year runs from July 1st to June 30th.


Making climate change meaningful: celebrity vegans and the cultural politics of meat and dairy consumption

The disconnect between what we know about climate change and how we relate it to the cultural values of our everyday lives is a gap that needs addressing if we are to deal with this issue in a meaningful and effective way (Doyle 2011a; Corner, Markowitz and Pidgeon 2014). Climate impacts across the globe are becoming increasingly visible and felt, but for those living in the global North and West, climate change can still feel a distant and remote issue if not experienced directly (Pidgeon 2012; Harvey 2015), with seemingly little connection to the social practices and concerns of our daily lives. Yet climate change is intimately bound up with our daily activities, from the food we purchase and the transport we take, to the products we buy and the values we hold. Creating more sustainable societies requires significant changes to our energy-intensive lifestyles. But how do we achieve this?

Connecting climate to culture is essential (Doyle 2011a; Hulme 2015). Making climate culturally meaningful involves creating linkages not only between what we do and how this affects our climate, but also using culture (through popular music, arts, literature, media, entertainment and sport) as a way of inspiring and helping achieve social and political change. From this perspective, the dominance of celebrity culture within our media and cultural landscape means that celebrities have an increasing role to play in the cultural politics of climate. Indeed, a growing number of scholars are focusing critical attention upon celebrity involvement in environmental and humanitarian activism (Brockington 2008; Littler 2008; Boykoff and Goodman 2009; Anderson 2013).

Celebrities can help draw attention to an issue and galvanise youth engagement (Alexander 2013). At the same time, the individualisation and commodity relations that support the societal processes of celebritization (Driessens 2013) can be problematic in the context of climate change, which requires significant socio-economic shifts to achieve sustainable societies. Yet, given the disconnect between climate science and the social practices of the everyday, celebrities can act as important intermediaries, helping make the complexities of climate change more accessible and relevant to our daily lives. Food is one area where celebrities can help link the impacts of climate change to our consumption habits. Indeed, as one of the largest contributors to greenhouse gas emissions, the production and consumption of meat and dairy (Gerber et al. 2013) is a crucial social practice that requires further interrogation.

The climate politics of meat and dairy consumption

According to the Food and Agriculture Organization of the United Nations (FAO), meat and dairy consumption contributes 14.5% of human-induced global greenhouse gas emissions (Gerber et al. 2013). This report builds upon the FAO’s 2006 publication, Livestock’s Long Shadow, which brought significant attention to the climate impacts of the livestock sector. Emissions are produced through land use for livestock pastures and animal feed crops, leading to significant deforestation; methane from animal effluent; nitrous oxide from animal waste; and water use for the irrigation of animal feed crops, particularly soy beans (FAO 2006). In 2006, the FAO stated that ‘civil society seems to have an inadequate understanding of the scope of the problem’ (FAO 2006, 282).

NGOs have been reluctant to engage with this issue, for fear of alienating people and of addressing what is perceived to be too personal an issue – the food that we eat (Doyle 2011b; Laestadius 2014). Prominent environmentalists such as Bill McKibben have further contributed to a general reticence within civil society to tackle the issue (McKibben 2010). Likewise, governments have been disinclined to promote the reduction or elimination of meat and dairy because they are generally ‘reluctant to tackle questions of personal choice and consumption’, instead focusing their climate campaigning efforts upon household energy consumption (Robins and Roberts 2006, 39).

This failure to adequately address the climate impacts of meat and dairy illustrates how forms of consumption are embedded within existing socio-cultural practices: meat and dairy consumption is largely conceived as a ‘natural’ practice within western and middle-class societies (Heinz and Lee 1998). The lack of engagement by NGOs and governments thus means that the opportunity to link climate to the practices and values of everyday life (even when this involves questioning those values and practices) is significantly reduced.

The celebrity politics of veganism

Despite this failure to engage, the recent rise in the number and profile of celebrity vegans – celebrities from the fields of entertainment, sport and politics who have publicly adopted a vegan diet (CBS News 2011), involving the elimination of meat, dairy, eggs and fish – offers the potential for a previously stigmatised practice (Greenebaum 2012) to achieve mainstream credibility. Forbes announced ‘high-end vegan cuisine’ as one of the top ten food trends of 2013 (Bender 2013). Historically, veganism has been viewed largely in a derogatory way, framed in mainstream media as ridiculous and ‘difficult’, with vegans characterised as ‘oversensitive’, ‘ascetic’ and ‘hostile’ (Cole and Morgan 2011, 139). The increasing visibility of vegan celebrities is thus welcome, bringing an ignored or stigmatised identity (Greenebaum 2012) into mainstream media culture.1

In order to consider the potential influence of celebrity vegans, it is important to understand their different celebrity profiles. For example, the actor and writer Alicia Silverstone, most famous for her role in the film Clueless (1995), and the comedian and talk-show host Ellen DeGeneres are two prominent female celebrity vegans who use their celebrity status to promote veganism – through books (in Silverstone’s case), television interviews, websites and social media. Both are celebrities within the entertainment industry, but they present very different approaches to communicating their veganism and their own vegan philosophies.

Silverstone’s veganism is communicated through her book, The Kind Diet: A Simple Guide to Feeling Great, Losing Weight, and Saving the Planet (2009), which is accompanied by an environmental lifestyle website/blog called The Kind Life, and supported by a Facebook page and Twitter feed (The Kind Life, 2014). As part of The Kind Life brand, the books, website and social media presence work as an integrated platform to promote Silverstone’s personal vegan lifestyle and philosophy. Utilising discourses of self-help and healing, Silverstone places the self as central to veganism, where being kind to oneself and others is the route and basis to becoming vegan.

Historically, animal suffering and anti-speciesism (opposition to distinctions between humans and animals) have been the political and ethical basis of veganism (Adams 2010; Cole and Morgan 2011). Silverstone adheres to this too: ‘The dairy industry is, in a word, cruel: That is why I gave up dairy in the first place’ (2009, 42). Silverstone’s vegan ethics, however, are human-centred. Discussing why meat and dairy are ‘nasty’, Silverstone argues first that they harm the health of your body, then discusses their impacts on animals, and then on the planet.

Silverstone’s Kind Life brand positions veganism through a framework of compassion, care and emotion; an important component of a vegan ecological ethic (Plumwood 2002; Adams 2010). Silverstone makes valuable interconnections between humans, animals and environment, particularly important for an understanding of climate change. Yet, the personalized lifestyle presented by Silverstone – such as shopping choices and socializing with celebrity friends – draws upon and extends her celebrity commodity status, making it difficult to disentangle the political and ethical from the individualized commodity lifestyle of celebrity culture.

Since 2003, Ellen DeGeneres – the fifth most powerful celebrity of 2013 (Forbes 2013) – has hosted her daytime talk show, The Ellen DeGeneres Show, where her affable, warm and empathetic persona has made her a mainstream success. Yet DeGeneres’ career was significantly affected when she came out as a lesbian in 1997, causing a three-year hiatus. In 2008 she became vegan and had a high-profile vegan wedding with the actor Portia de Rossi, which reinforced her celebrity status. DeGeneres moved into vegan lifestyling in 2011 with the launch of her website, Going Vegan with Ellen (Pollack 2011), which has since been subsumed into a section called ‘Ellen’s Healthy Living’ on the Ellen DeGeneres Show website.

Whereas Silverstone’s vegan message is consistent across all media platforms, DeGeneres’ veganism represents only a part of her celebrity profile. On her website, veganism is presented primarily through discourses of health – ‘Going vegan increases your metabolism, so even if your calories increase, you won’t necessarily be gaining weight’ (Ellen DeGeneres Show, 2014) – and, to a lesser extent, animal welfare. DeGeneres deploys other celebrity vegans to communicate her message: images of ‘Famous Vegans’ from entertainment, sports and politics appear with descriptions ranging from health and weight-loss benefits to animal rights and, to a much lesser extent, environmental concerns. The integration of other celebrities within the promotion of veganism is a strategy that contributes to the celebritization of the issue, increasing the potential reach and accessibility of veganism, yet further inscribing it within celebrity commodity relations (Driessens 2013).

Although it is through a celebrity public self that DeGeneres’ veganism is situated, a private self is revealed in an interview with the journalist Katie Couric (CBS News 2010). DeGeneres explains her veganism as an expression of the need for love, compassion and equality for all humans and species, revealing a more radical vegan ethic than is communicated via her talk show and website. Yet it is through her talk show that DeGeneres is able to present her beliefs and values in a humorous and non-threatening way. DeGeneres has a staggering 29.7 million Twitter followers (Silverstone has 249,000). Her tweets replicate the humorous and caring public self of her TV show and website – combining jokes, celebrity promotions, excerpts from her TV show, social and political issues (such as anti-bullying and LGBT equality) and funny or cute animal stories. It is through this relationship with her audience that the effects of DeGeneres’ veganism need to be considered.

Climate, culture and celebrity

Making climate culturally meaningful is an urgent matter. Celebrities can help by relating the causes and impacts of climate change to existing socio-cultural practices, facilitating not only a questioning of cultural values (such as meat and dairy consumption) but simultaneously making the necessary changes to our habits appear more positive, achievable and accessible. Yet we must also question the individualist, aspirational lifestyle that accompanies celebrity culture, and remain attentive to the full range of possibilities for the meaningful, wide-scale socio-cultural changes necessary for addressing climate change.


Julie Doyle is a Reader in Media at the University of Brighton, UK. She is on the Board of Directors of the International Environmental Communication Association (IECA) and a co-founder of the Science and Environment Communication Section of the European Communication Research and Education Association (ECREA).

References cited here.

1 A more detailed analysis of celebrity vegans is presented in my forthcoming article, ‘The politics of being vegan: celebrities, ethics, ecology and feminism’, Environmental Communication, Special issue: Spectacular Environmentalisms (2015).


Green Industrial Policy: A Climate Necessity

Industrial policy – government support of the manufacturing sector – has long been lampooned as the archetype of failed state intervention. Yet it has seen a resurgence of late as a potential strategy for “green growth.”(1)

The idea of promoting specific industrial aims through government strategy is subject to frequent ridicule. In supporting entrepreneurial activity, policymakers must navigate the tension between promoting initiatives theoretically tied to the public good and preventing capture by special interests. Decoding how particular industrial sectors or urban areas become exceptionally innovative – biomedicine in Cambridge, MA, or technology in Silicon Valley, CA – has proved a monumental task for economists.

Until recently, it appeared that scholars had arrived at a singular verdict: industrial policy rarely succeeds, and if it is to have a chance of generating genuine growth, careful design and implementation are paramount.(2) The government, the conventional wisdom holds, should not be in the business of picking winners and losers. The problem, according to industrial policy’s more nuanced critics, is that the government best serves as a catalyst for early-stage projects, but often overstays its welcome.(3)

The complete absence of government involvement in the cultivation of industry and entrepreneurialism is the preserve of arch-libertarian utopias alone. The true question is less “Do governments directly contribute to technological innovation?” than “How do governments sponsor innovation, and how much credit should they get for fostering novel ideas?”

Recent scholarship has challenged the prevailing sentiment that the American government has had little impact on the seminal entrepreneurial developments of the past half-century. In her provocatively titled book The Entrepreneurial State: Debunking Public vs. Private Sector Myths, the economist Mariana Mazzucato has gone to great lengths to show how, in the words of economic policy journalist Jeff Madrick, “less and less basic research is being done by companies today. Rather, they focus on the commercial development of the research already done by the government.”(4)  Increasingly, that research is focused on relieving our seemingly intractable reliance on fossil fuels.

Reducing carbon emissions and fostering growth are often cast as irreconcilable goals. The claim that any serious attempt to curb fossil fuel extraction would prove economically deleterious is a mainstay of arguments against decisive action on climate change. “Green growth” provides a rhetorically convenient and theoretically sound moniker for, broadly, policy approaches that encourage low-carbon economic activity and initiatives that directly support the development of low-carbon technologies.

Empirically, markets are more likely to promote innovation when energy prices are high.(5) In an influential American Economic Review article published in 2012, Daron Acemoglu of MIT, Philippe Aghion of Harvard, and colleagues found that environmental disaster is avoidable when governments act swiftly and deliberately with policies that encourage innovation in a low-carbon direction.(6) Even in developing countries, the promotion of policies that are both pro-growth and environmentally conscious is achievable.(7)

In a recent column, the Nobel Prize-winning economist Paul Krugman found cause for cautious optimism in market-based solutions to climate change. His faith comes from progress in renewable energy technology, leading him to tentatively declare, “It’s even possible that decarbonizing will take place without special encouragement, but we can’t and shouldn’t count on that.”(8)

This newfound hope construes technology as a potent means of avoiding entrenched political debates about climate policy. If a disruptive, all-encompassing, low-carbon renewable energy source were successful at scale, the collective action problems that define climate change would be significantly reduced.

Yet the global energy infrastructure is predominantly outmoded, designed – from extraction to consumption – for fossil fuels. Even with profitable renewable energy technologies, the social good derived from low-carbon energy generation may outweigh the benefit captured by private industry, leaving private incentives to invest too weak on their own.(9)

Critiques of industrial policy range from objections to its efficacy to paranoid claims of Marxism and subversive command economies. In economic parlance, however, the detrimental impacts of carbon constitute an externality – a market failure that requires government intervention to address. Furthermore, renewable energy presents a particularly fraught problem of coordination.

Take solar power as an example: an advanced panel requires massive public investment to scale, the ability to feed into existing grid infrastructure, friendly tax policies to encourage adoption of solar power by utilities, and – certainly over the long term – the development of energy storage mechanisms.

Even with the potential for markets to naturally move towards renewables, the likelihood that these coordination problems will be endogenously addressed is low. Democratic governments must therefore identify and correct some of these market failures.

The intensity of the climate crisis suggests that green industrial policy may merit an exception to the general avoidance of industrial policy. Larry Karp and Megan Stevenson, two economists at the University of California, Berkeley, argue that because green industries rely so heavily on future government policy (subsidies for individual customers adopting renewables, for example), there is little incentive to make large investments in the present.(10)

Technological change is rarely achieved through unilateral policies. The rich history of how innovations come to fruition demands that policymakers operate with a varied toolkit, not hidebound ideology. Objections to industrial policy, emblematized by the controversy over Solyndra and the loan funds dispensed by the Department of Energy in President Obama’s first term, have collapsed into partisan squabbling rather than well-founded policy debate.

In reality, industrial policy related to energy has long been deployed in the United States. Support for ethanol, a commodity that plays well politically, has been strong since the oil crisis of the 1970s, despite the attendant environmental impacts. Furthermore, subsidization of the conventional oil industry constitutes a kind of industrial policy, although it is rarely directly derided as such. (11)

While a market-based solution to the climate crisis emerging from a disruptive energy technology is an alluring ideal, a crisis cannot be addressed through fevered reliance on the potential for innovation; innovation has to be actively encouraged. The exigencies of environmental decline and the perverse role of carbon in the economy – CO2 will hit 400 ppm in May – demand a direct government role in energy innovation.(12)

Danny Wilson is a History and Science Concentrator at Harvard University, and the former chair of the Environmental Action Committee.

(1) The economist Dani Rodrik introduces the term “green industrial policy” in his thorough overview: Dani Rodrik, “Green Industrial Policy,” paper written for the Grantham Research Institute, July 2013, http://www.sss.ias.edu/files/pdfs/Rodrik/Research/Green-growth-and-industrial-policy.pdf.
(2)A survey of (usually) negative sentiments towards industrial policy is in Ricardo Hausmann and Dani Rodrik, “Doomed to Choose: Industrial Policy as Predicament,” paper presented at the First Blue Sky Conference, Center for International Development, Harvard University, September 9, 2006. See http://www.sss.ias.edu/files/pdfs/Rodrik/Research/doomed-to-choose.pdf.
(3)Josh Lerner, Boulevard of Broken Dreams: Why Public Efforts to Boost Entrepreneurship and Venture Capital Have Failed – and What to Do about It. Princeton, NJ: Princeton University Press, 2009. Lerner takes a comparative approach, citing the disparate experiences of countries that have adopted different policies designed to promote new industries.
(4) Jeff Madrick, “Innovation: The Government Was Crucial After All.” The New York Review of Books, April 24, 2014. His review is of Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths. London and New York: Anthem Press, 2013.
(5) See Richard G. Newell, Adam B. Jaffe, and Robert N. Stavins, “The Induced Innovation Hypothesis and Energy-saving Technological Change,” The Quarterly Journal of Economics 114, no. 3 (1999): 941-975; and David Popp, “Induced Innovation and Energy Prices,” American Economic Review 92, no. 1 (2002): 160-180. Popp in particular argues that incentives in the form of taxation and regulation play a crucial role: “My results also make clear that simply relying on technological change as a panacea for environmental problems is not enough.”
(6) Daron Acemoglu et al., “The Environment and Directed Technical Change,” American Economic Review 102, no. 1 (2012): 131-166.
(7) Alex Bowen, Sarah Cochrane, and Samuel Fankhauser, “Climate Change, Adaptation, and Economic Growth.” Climatic Change 113 (2012): 95-106.
(8)Paul Krugman, “Salvation Gets Cheap.” New York Times, April 17, 2014.
(9) David Popp, “Innovation and Climate Policy,” National Bureau of Economic Research Working Paper 15673, January 2010.
(10) Larry Karp and Megan Stevenson, “Green Industrial Policy: Trade and Theory.” Policy Research Working Paper 6238, October 2012.
(11) Ibid., 34.
(12) For the Keeling Curve, a measure of atmospheric CO2, see http://keelingcurve.ucsd.edu/.

Climate Change Migration and Social Innovation

Climate change threatens to displace millions of people either across national borders or to new regions of their own country. While scientists cannot predict the exact number, a joint academic, civil society, and UN study concluded that “the scope and scale could vastly exceed anything that has occurred before” (CARE International et al., iv). Extreme weather events, rising sea levels, desertification, and other environmental disruptions will make certain parts of the globe uninhabitable. Residents of developing countries and small island states are particularly vulnerable to being driven from their homes.

Academics and advocates have urged states to take immediate action and recommended various ways to minimize the disruption faced by cross-border and internal migrants. This essay examines two model instruments. A Convention on Climate Change Refugees was proposed by Tyler Giannini and me in a 2009 article in the Harvard Environmental Law Review. The Peninsula Principles on Climate Displacement within States were initiated by the nongovernmental organization Displacement Solutions and finalized and adopted in 2013 by an international group of climate change experts that included lawyers, policy makers, and scholars. While these instruments differ in structure and scope, a comparison illuminates elements essential to any framework seeking to address the humanitarian impact of climate change migration: a focus on victims, a range of assistance, shared responsibility, and implementation mechanisms. Both models also approach a complex legal problem from an interdisciplinary point of view.

Two Frameworks

The proposed convention strives to address the needs of cross-border climate change migrants. It defines a climate change refugee as “an individual who is forced to flee his or her home and to relocate temporarily or permanently across a national boundary as the result of sudden or gradual environmental disruption that is consistent with climate change and to which humans more likely than not contributed” (Docherty & Giannini, 361). The proposed convention’s provisions fall into three categories. First, they mandate different types of assistance for climate change refugees. Second, they spread responsibility across host states, home states, and the international community. Third, they establish administrative bodies to ensure other provisions are effectively implemented. The proposed convention would ideally come in the form of a stand-alone legally binding instrument.

The Peninsula Principles seek to minimize the impact of climate change migration on individuals displaced within the boundaries of their own country. They define climate displaced persons as “individuals, households or communities who are facing or experiencing climate displacement,” which is “the movement of people within a State due to the effects of climate change including sudden and slow-onset environmental events and processes, occurring either alone or in combination with other factors” (Peninsula Principles, 16). The principles open with a preamble laying out their humanitarian purposes and international sources as well as an introduction with definitions and overarching provisions. The rest of the document is divided into five sections: general obligations, climate displacement preparation and planning, displacement, post-displacement and return, and implementation. Conceived as an international normative framework, the Peninsula Principles aim to provide “a clear and consistent soft law basis for… practical actions” (Peninsula Principles, 10).

Common Elements

The proposed convention and the Peninsula Principles adopt divergent strategies for addressing an emerging global crisis. While the convention would be a legally binding instrument covering climate change refugees, the principles are designed as a non-binding set of norms applicable to climate displaced persons. A closer look, however, reveals common elements that should serve as the basis for any legal framework that deals with climate change migration.

Focus on Victims

Both the proposed convention and the Peninsula Principles focus on the needs of climate change victims, not the interests of the states from which or within which they migrate. These humanitarian instruments stress the importance of nondiscrimination in order to ensure that individuals receive assistance regardless of their age, sex, race, religion, or other status (Docherty & Giannini, 377-378; Peninsula Principles, 16). In addition, they emphasize the value of victims’ involvement in choices that affect their future. The proposed convention requires the agency established to implement its provisions to “take into account the opinions and concerns of climate change refugees themselves and allow them to participate in decision-making” (Docherty & Giannini, 388). According to the Peninsula Principles, states should consult with climate displaced persons and obtain their consent before relocating them, except when there is an imminent threat to life or limb (Peninsula Principles, 19, 22).

Range of Assistance

The proposed convention and the Peninsula Principles agree that states should provide a range of legal and practical assistance to climate change migrants. Both frameworks require protection of human rights and delivery of humanitarian aid. Drawing on the model of the 1951 Refugee Convention, the proposed convention obligates states to guarantee both civil and political rights, such as access to courts and the freedom to associate, and economic, social, and cultural rights, including rights to education, employment, and housing (Docherty & Giannini, 376-377). The proposed convention goes beyond the Refugee Convention, however, in order to ensure that “basic survival needs are met” (Docherty & Giannini, 378). The Peninsula Principles similarly call upon states to ensure climate displaced persons receive “support[] in claiming and exercising their rights,” and they specifically highlight rights related to housing, livelihood, and access to the justice system (Peninsula Principles, 17, 27). On a more practical level, the principles declare that states should provide humanitarian assistance, such as food, water, shelter, health services, and sanitation (Peninsula Principles, 25).

While the frameworks mandate remedial measures after migration has occurred, they also urge states to take preventive steps. Under the proposed convention, home states are obliged “to the extent possible, to address increased refugee flows before they reach the crisis stage. Crisis prevention could consist of either attempting to eliminate the need for migration or preparing to handle it in an organized way” (Docherty & Giannini, 381). The Peninsula Principles devote a section to “climate displacement preparation and planning.” The principles state that, in advance of climate displacement, countries should develop risk management strategies, identify possible relocation sites, and create institutional frameworks to facilitate the provision of assistance when it becomes necessary (Peninsula Principles, 19-25).

Shared Responsibility

Recognizing that climate change is a “global problem” with an “international cause and transboundary effects,” the two frameworks create systems of shared responsibility (Peninsula Principles, 12; Docherty & Giannini, 379). Both instruments place primary responsibility on the state where the migrants are located. The proposed convention obliges the host state to take the lead on protecting climate change refugees’ rights and providing them adequate humanitarian aid. The home state should supplement that assistance “to the extent possible” by implementing preventive measures and facilitating emigration when it is necessary and refugee return when it is feasible (Docherty & Giannini, 379-382). Because there is no host state in the case of climate displacement, the Peninsula Principles assign all of those responsibilities to the home state.

Given affected states’ limited resources and the problem’s global origin, the two frameworks identify international cooperation and assistance as essential to the solution. According to Docherty and Giannini, “The home and host states should not have to bear the burden of climate change refugees alone because, for the most part, their actions are not the root of the problem” (382). The proposed convention obligates the international community to provide support either to affected states directly or to humanitarian organizations that can deliver aid (Docherty & Giannini, 384). The Peninsula Principles list international cooperation and assistance as one of their general obligations, stating that “[c]limate displacement is a matter of global responsibility, and States should cooperate in the provision of adaptation assistance . . . and protection for climate displaced persons.” The Peninsula Principles grant affected states the right to seek assistance and demand that other states and international agencies provide it (Peninsula Principles, 18).

Implementation Mechanisms

To make the above elements a reality, the two frameworks require implementation mechanisms. The proposed convention focuses on three international bodies. It creates a global fund to “manage the provision of international assistance” (Docherty & Giannini, 385). It establishes a coordinating agency, akin to the Office of the UN High Commissioner for Refugees, to facilitate protection of human rights and delivery of humanitarian aid (Docherty & Giannini, 388-389). It also forms a body of scientific experts to determine who qualifies as a climate change refugee, assess each state’s financial responsibility, and conduct studies to help states better prepare for migration (Docherty & Giannini, 389-391). The Peninsula Principles urge affected states to implement the provisions on preventive and remedial measures at “local, regional, and national” levels (Peninsula Principles, 18). According to the principles, states should adopt relevant laws and policies, earmark financial resources, and “take all appropriate administrative, legislative and judicial measures . . . [to] support and facilitate the provision of assistance and protection to climate displaced persons” (Peninsula Principles, 24).

An Interdisciplinary Approach

Neither human rights law nor international environmental law adequately addresses the humanitarian problem of climate change. Traditional definitions of refugees and internally displaced persons do not encompass climate change migrants, and environmental law does not specifically deal with human migration. For this reason, the proposed convention and the Peninsula Principles take an interdisciplinary approach.

In general, human rights law influences the types of assistance mandated by the climate change migration instruments, while international environmental law informs their more administrative provisions. The proposed convention turns to the Refugee Convention for guidelines on human rights protections for cross-border migrants (Docherty & Giannini, 376-377). It draws on the 1992 UN Framework Convention on Climate Change (UNFCCC) for models for its global fund and body of scientific experts, and for the precedent of assigning international assistance duties according to the standard of “common but differentiated responsibilities” (Docherty & Giannini, 385-391). The Peninsula Principles explicitly build on the 1998 UN Guiding Principles on Internal Displacement by requiring human rights protections and humanitarian aid for climate displaced persons (Peninsula Principles, 16). The Peninsula Principles also call on states to include climate displacement in their National Adaptation Programs of Action, which are mandated by the Conference of the Parties to the UNFCCC (Peninsula Principles, 24).

The interdisciplinary approach of the two climate change migration instruments extends to borrowing from other sources of law, including humanitarian disarmament and indigenous rights. The proposed convention bases its humanitarian aid requirements on the groundbreaking victim assistance provisions in the 2008 Convention on Cluster Munitions, which absolutely bans cluster munitions and establishes positive obligations to mitigate the harm caused by the weapons’ past use (Docherty & Giannini, 378). The designers of the proposed convention also recommend an independent and inclusive negotiating process similar to the Oslo Process that created the cluster munition treaty (Docherty & Giannini, 398-400). The Peninsula Principles look to the 2007 UN Declaration on the Rights of Indigenous Peoples, which recognizes these peoples’ unique relationship to the land. To reduce the impact of climate change on indigenous peoples, the principles state that relocation planning should maintain or replicate “rights to access traditional lands and waters” (Peninsula Principles, 24). Because no existing legal framework comprehensively deals with climate change migration, these solutions to the problem combine components of various precedents that have been tested and found effective.

Conclusion

The proposed Convention on Climate Change Refugees and the Peninsula Principles apply to different categories of climate change migrants and represent different types of legal instruments. Their commonalities, however, should be seen as essential elements of climate change migration law whatever form it may take. The interdisciplinary approach espoused by both the proposed convention and the Peninsula Principles is also crucial to the success of efforts to help people forced to flee their homes and ways of life. Climate change migration is a new humanitarian problem that requires an innovative solution.

Bonnie Docherty is a lecturer on law and senior clinical instructor at Harvard Law School’s International Human Rights Clinic.

References

A Cost and Benefit Case Study Analysis of Biofuels Systems

Within the past decade, biofuels have become key research initiatives and investments for many states, with implications for agricultural and development economics. Recent innovations in both first generation (1G) and second generation (2G) biofuels herald a long-term emphasis on energy sustainability and efficiency. 1G energy crops include corn, grains, and sugar cane, while lignocellulosic (2G) biofuels derive from corn stover, sugarcane bagasse, and various forest residues. This paper presents a methodology for the economic analysis of investment in different types of biofuel systems. It aims to determine whether 1G and 2G biofuels would be viable economic and financial investments for typical developed and developing nations. First, we collate and analyze the empirical findings on the socioeconomic effects of biofuels to construct a cost-benefit analysis, focusing on US-based and international case studies. We then analyze the energy and emissions potential of biofuels. Following the qualitative report, we offer a preview of our Net Present Value (NPV) model and its final results.


Socioeconomic Cost-benefit Analysis (Biofuels)

The Socioeconomic Impact of First Generation Biofuels

Impact of Biofuels on Current Consumption and Production in the U.S.

As of 2013, first generation biofuels have enjoyed regular and assured growth in the US. First generation biofuels must demonstrate a 20% reduction in lifecycle greenhouse gas (GHG) emissions compared to the baseline of the original fuel (U.S. D.O.E., 2013c).  As a result of this standard, biofuels, predominantly starch ethanol and biodiesel, have been increasingly introduced into fuels since 2005, when the standard was originally implemented as part of the 2005 Energy Policy Act. This trend has endured thanks to the Energy Independence and Security Act of 2007 (EISA) Renewable Fuel Standard (RFS), which requires the blending of renewable fuels with traditional petroleum-based fuels.

Biodiesel production was an estimated 135 million gallons in December 2013, with a capacity of 2.2 billion gallons per year (U.S. D.O.E., 2014a). Ethanol production was 1.2 billion gallons in December 2013, with a capacity of 13.852 billion gallons per year (U.S. D.O.E., 2014b). This is a large increase from 2012, when the nation experienced a month-to-month decline in biofuel production due to the drought afflicting many of its agricultural regions. With the ebbing of the drought in 2013, biofuel production resumed. Ethanol production averaged 925,000 barrels per day in 2014, while biodiesel production averaged 87,000 barrels per day.



Source: U.S. Department of Energy, 2014a

Projected Consumption and Production in U.S.

Over the coming decades, biofuels are projected to grow but to remain a small portion of the US liquid fuel supply. According to the U.S. Energy Information Administration (EIA), biofuels will grow by about 0.4 million barrels per day from 2011 to 2040, thanks to the RFS mandate (U.S. D.O.E., 2013a). This growth rate could increase if the RFS were expanded, though for the moment that seems unlikely. However, despite the mandate, overall biofuel growth will remain limited as a result of decreased gasoline consumption, according to an EIA prediction. This decline, down to 8.1 million barrels per day in 2022, will also cause biofuels to fall short of the EISA 2007 target. As a result, the mandate is not likely to cause any additional growth in biofuels in this half-century. After 2020, second generation biofuels will overtake 1G biofuels and provide most of the industry’s growth. Annual ethanol consumption is projected to decline to 14.9 billion gallons by 2040; despite the decline, ethanol will continue to be the primary biofuel in the United States.


Source: U.S. Department of Energy, 2013c


International Impact of Biofuels in Current Consumption & Production

Internationally, biofuel production and consumption are dominated by the United States and Brazil. In 2011, the two nations accounted for 70% of global biofuel consumption and 74% of global production (U.S. D.O.E., 2011b). Biofuels in the United States are dominated by corn-based ethanol, while those of Brazil are primarily sugar cane-based. Both fuel types have grown in use and consumption over the past decade. Other countries, such as France, Germany, and China, contribute to global biofuel production and consumption as well. France, Germany, and other countries favor biodiesel, in keeping with the high proportion of diesel vehicles in those countries, while China prefers to use ethanol as a motor fuel. However, in no country other than the United States and Brazil do biofuels contribute a significant portion of the motor fuel or energy supply.

International Projected Consumption and Production

Mirroring current levels, biofuel consumption is projected to remain rather low on a global scale, even when both 1G and 2G biofuels are included. The total increase in renewable energy consumption, which includes biofuels, is projected to be a meager 4%, with renewables contributing between 11% and 15% of global energy consumption by 2040. More specifically, transportation fuels, the primary use for biofuels, are projected to grow 1.1% per year, or 38% overall, by 2040 (U.S. D.O.E., 2013b). Among transportation fuels, non-petroleum liquid fuels, a category predominantly composed of biofuels, will experience 3.7% annual growth until 2040, with most of this growth occurring in the United States and Brazil. Overall, while it may seem impressive, this growth is small, if not negligible, in the face of global energy growth: total global energy consumption will grow 56% between 2010 and 2040, from 524 quadrillion British thermal units (Btu) to 820 quadrillion Btu. This overall energy growth will primarily occur in developing countries, while the future of biofuels as a mainstream fuel will probably remain in the US and Brazil.

Environmental Impact and Emissions

Despite their occasional proclamation as “green” fuels, first-generation biofuels, primarily ethanol, are not without their own GHG emissions. While ethanol does produce fewer overall GHG emissions than gasoline, its production is still an energy-intensive process with secondary effects. Gasoline generally produces 8.91 kg of CO2 per gallon, compared to 8.02 kg per gallon for E10 and 1.34 kg per gallon for E85. Based on a study by Dias de Oliveira et al. (2005), corn-based ethanol requires 65.02 gigajoules (GJ) of energy per hectare (ha) and produces approximately 1236.72 kg/ha of carbon dioxide (CO2), while sugar cane-based ethanol requires 42.43 GJ/ha and produces 2268.26 kg/ha of CO2, under the assumption of non-carbon-neutral energy production. These emissions accrue from agricultural production, crop cultivation, and ethanol processing. Once the ethanol is blended with gasoline as E10, it results in carbon savings of approximately 0.89 kg of CO2 per gallon consumed (U.S. D.O.E., 2011a).
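To make the arithmetic behind these blend figures explicit, the short sketch below recomputes them. It rests on an assumption the cited numbers imply but do not state: only the gasoline fraction of a blend is counted at the tailpipe, with ethanol combustion treated as carbon-neutral. The function and constant names are ours, for illustration only.

```python
# Sketch: per-gallon tailpipe CO2 for ethanol-gasoline blends, assuming
# (as the figures above imply) that only the gasoline fraction is counted
# and that ethanol combustion is treated as carbon-neutral.
GASOLINE_KG_CO2_PER_GAL = 8.91  # figure cited above (U.S. D.O.E., 2011a)

def blend_emissions(ethanol_fraction: float) -> float:
    """Return kg of CO2 per gallon of blend, counting only the gasoline share."""
    return GASOLINE_KG_CO2_PER_GAL * (1.0 - ethanol_fraction)

for name, fraction in [("E10", 0.10), ("E85", 0.85)]:
    kg = blend_emissions(fraction)
    saved = GASOLINE_KG_CO2_PER_GAL - kg
    print(f"{name}: {kg:.2f} kg CO2/gal ({saved:.2f} kg/gal saved vs. gasoline)")

# Output: E10 -> 8.02 kg/gal (0.89 saved); E85 -> 1.34 kg/gal,
# matching the figures quoted in the text.
```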

Secondary Effects

Beyond emissions, 1G biofuel production has many side effects, including negative impacts on land and water, loss of biodiversity, and air pollution. Unlike fossil fuels, biofuels require large tracts of arable land for cultivation in addition to land for the physical conversion plants. As a result, biofuel production suffers from many of the same issues as agriculture itself: water diversion and pollution, exhaustion of arable land, and destruction of natural habitats. Moreover, since biofuels increase the demands on agricultural cultivation, these secondary effects can spread across a wider area as biofuel production grows.


Impact on Food Supplies

Price

Since 2000, global food prices have been increasing rapidly, affecting developed and developing countries alike. The spike in prices eased in 2009-2011 due to the Great Recession; food prices have maintained their upward trajectory nonetheless. Causes typically cited for food price increases include competition from biofuels, production issues, policy decisions, and droughts. Biofuel production contributes to the growth of food prices by reducing food production. Corn, as the primary crop used for biofuels, has seen the greatest price increases; 70% of the growth in corn production was for biofuel production (Mitchell, 2008). However, due to the nature of the international food market and the use of other crops, such as sugarcane, for biofuels, prices for all major crops have increased. An estimate by the International Food Policy Research Institute indicates that biofuels may be responsible for 30% of weighted food price increases from 2001-2007 (Rosegrant, 2008). Continued growth in biofuels can be expected to add further to the growth in food prices.

Supply

Based on its agricultural capacity, the United States will never be capable of producing enough first-generation biofuels to meet all of its fuel and energy needs without compromising its food supply and that of other nations that depend on the US for food. In 2005, 14.3% of US corn production was used to replace a mere 1.72% of gasoline usage (Hill et al., 2006); scaled linearly, replacing all US gasoline would require on the order of eight times the entire corn crop. Achieving a significant long-term reduction in fossil fuel usage through first-generation biofuels alone would therefore be impossible due to this prohibitive impact on the food supply. As will be discussed later, second generation biofuels may have greater potential to reduce fossil fuel usage while maintaining the food supply.

In international locales, we expect largely similar results, particularly in smaller, more densely populated nations. Currently, Brazil, the other major biofuel producer, has a larger share of its fuel provided by biofuels. However, as its population grows and becomes wealthier, we can expect this percentage to decrease as the country runs into similar agricultural supply problems. If the United States and Brazil, two of the world’s largest agricultural producers, already experience such difficulties, we can reasonably expect that most other nations will face similar obstacles.

1G versus 2G

In recent years, the socioeconomic and environmental sustainability of first generation (1G) biofuels has been called into question. The viability of 1G energy crops such as corn, grains, and sugar cane is uncertain, primarily because they compete with food crops and may not even offer significant GHG emissions reductions. Although sustainability issues are increasingly considered for 2G energy crops, there are important lessons to be learned from the sustainability challenges posed by 1G crops (Carriquiry et al. 2011). The major sources of lignocellulosic (2G) biofuel feedstocks are as follows: agricultural residues (corn stover, sugarcane bagasse), forest residues, and herbaceous and woody energy crops, including perennial forage grasses like miscanthus (Miscanthus giganteus) and switchgrass (Panicum virgatum).

Issues commonly classified as environmental, economic, or social are often related to each other in complex ways (Mohr & Raman, 2013). For example, food security issues arising from the diversion of crops to 1G biofuels might be resolved by the production of 2G biofuels, because the latter are not produced from feedstocks commonly used for food. However, food security quickly becomes relevant again when non-food energy crops are grown on land that could otherwise be valued in food production, or when biofuel production using agricultural residues is linked to 1G feedstocks. And while 2G crops can be grown on otherwise marginal land, this land could possibly be utilized by the poor for subsistence (Mohr & Raman, 2013).

Nonetheless, cellulosic energy crops are promising because of their environmental benefits. Madhu Khanna (2008) lists the following potential incentives for transitioning to 2G biofuels: reduced soil erosion, improved sequestration of carbon in the soil, and lower inputs of energy, water, and agrochemicals. Khanna notes that environmental benefits vary, among other factors, with the ability of different crops to sequester carbon in soil and with their energy input requirements.

Costs of Production

Khanna’s report (2008) includes useful quantitative metrics for assessing the economic viability of cellulosic biofuel energy crops. From a production standpoint, miscanthus can produce 742 gallons of ethanol per acre of land, nearly twice as much as corn (399 gal/acre, assuming an average yield of 145 bushels per acre under a normal corn-soybean rotation), more than three times as much as switchgrass (214 gal/acre), and more than four times as much as corn stover (165 gal/acre). Production costs are a big impediment to large-scale implementation of 2G biofuels, and their market demand will depend primarily on their price competitiveness relative to corn ethanol and gasoline. At the time of the report, conversion costs for cellulosic fuels, at $1.46 per gallon, were roughly twice those of corn-based ethanol, at $0.78 per gallon. Overall, cellulosic biofuels from corn stover and miscanthus were 24% and 29% more expensive than corn ethanol, respectively, and switchgrass biofuel was more than twice as expensive as corn ethanol.
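The comparisons above are simple ratios of Khanna’s figures, and the sketch below recomputes them so the relative magnitudes are easy to verify. The only numbers used are those cited in the text; the variable names are ours.

```python
# Recompute the yield and cost ratios implied by Khanna (2008), as cited above.
ethanol_yield_gal_per_acre = {
    "miscanthus": 742,
    "corn": 399,          # assumes 145 bu/acre in a corn-soybean rotation
    "switchgrass": 214,
    "corn stover": 165,
}
conversion_cost_per_gal = {"cellulosic": 1.46, "corn ethanol": 0.78}  # US$

miscanthus = ethanol_yield_gal_per_acre["miscanthus"]
for crop, yield_gal in ethanol_yield_gal_per_acre.items():
    if crop != "miscanthus":
        print(f"miscanthus out-yields {crop} by {miscanthus / yield_gal:.1f}x per acre")

cost_ratio = conversion_cost_per_gal["cellulosic"] / conversion_cost_per_gal["corn ethanol"]
print(f"cellulosic conversion cost is {cost_ratio:.2f}x corn ethanol's")  # ~1.87, 'roughly twice'
```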

Social Impact

Availability of land is undoubtedly one of the key considerations in the discussion of the future potential of biofuels. According to a 2010 report published by the World Bank, a major advantage of using agricultural residues to produce biofuels is that they do not require additional land. Barring secondary environmental effects, such as their potential usefulness as ground cover, residue crops should have almost no direct impact on food prices. Biofuels produced from crop and forest residues have significantly lower land requirements than do dedicated energy crops, such as switchgrass and miscanthus (Carriquiry et al., 2011). Job creation and regional income growth are also important factors in assessing the viability of 2G biofuel production. According to a 2010 report published by the International Energy Agency, there is potential for job creation in the cultivation of feedstocks from dedicated energy crops. If production is based on residue use, then existing farm labor can be utilized, thus prolonging employment past the harvest season (Eisentraut, 2010). Feedstock cultivation and transportation do not require skilled labor, so there will be a sufficient workforce even in developing economies. The use of residues can also bring added revenue to the agriculture and forestry industries, with beneficial impacts on local economies and rural development.

Greenhouse Gas Emissions

Life-cycle analysis is often used to estimate the potential for various biofuel feedstocks to reduce GHG emissions in comparison with gasoline. Khanna’s findings (2008) show that corn and corn stover can reduce greenhouse gas emissions by 37% and 94%, respectively, in comparison to energy-equivalent gasoline. Switchgrass and miscanthus, however, are carbon sinks, meaning that they accumulate and store carbon-containing compounds for indefinite periods of time. A more comprehensive table compiled by the World Bank (Carriquiry et al., 2011) shows the relative GHG emission mitigation properties of various biofuels (see below).

Biofuel Type             Emission Reduction (%)
Sugarcane ethanol        65 to 105
Wheat ethanol            -5 to 90
Corn ethanol             -20 to 55
Sugarbeet ethanol        30 to 60
Lignocellulose ethanol   45 to 112
Rapeseed biodiesel       20 to 80
Palm oil biodiesel       30 to 75
Jatropha biodiesel       50 to 100
Lignocellulose diesel    5 to 120

Source: Carriquiry et al., 2011

Net Present Value (NPV) Model

In addition to our socioeconomic analysis, our full paper contains a net present value (NPV) model that details the economic viability of 1G and 2G biofuels in several national cases. There are four cases divided between two countries: a representative developed country (the United States) and a representative developing country (Brazil). These countries have exhibited potential for biofuel investment in terms of research, land, and crop allocations. The model simulates the rate of return (in dollars), or net benefit, of a conventional investment in 1G biofuels and a new investment in 2G biofuels over a 15-year time frame. Relevant ratios and metrics derived from the resulting numbers will also be analyzed in context. We also hope to compare these model figures with those of a coal plant and with the alternative of using the same land to grow regular food crops: which is the efficient economic investment? Finally, given this wealth of empirical and quantitative data, we will construct general investment and policy recommendations with applications in policy and economics.

For the purposes of the model, we simulated the costs and revenues of corn ethanol versus miscanthus-based cellulosic ethanol for the biofuels comparison. We then compared these numbers with the energy content per gallon of gasoline and with its price per gallon.
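The core of such a model is a discounted cash-flow calculation over the fifteen-year frame described above. The sketch below shows the general shape of that computation; the discount rate, cash flows, and the particular ROI definition are hypothetical placeholders chosen for illustration, not the inputs or outputs of our actual model.

```python
# Minimal NPV sketch over a 15-year horizon. All figures are hypothetical
# placeholders, not the model's actual inputs.
def npv(rate: float, cash_flows: list) -> float:
    """Net present value; cash_flows[0] is the year-0 (up-front) flow."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.10          # hypothetical
plant_investment = -50_000    # '000 US$, hypothetical up-front cost (year 0)
net_annual_revenue = 9_000    # '000 US$, hypothetical revenue minus operating cost

flows = [plant_investment] + [net_annual_revenue] * 15
project_npv = npv(DISCOUNT_RATE, flows)

# One common ROI definition: present value of returns per unit invested.
# The paper's exact definition may differ.
roi = npv(DISCOUNT_RATE, [0] + [net_annual_revenue] * 15) / -plant_investment
print(f"NPV: {project_npv:,.0f} ('000 US$)   ROI: {roi:.2f}")
```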

Aggregate Results

Tabulation of Findings:

Case                          Operating Profit    Net Present Value    Return on Investment
A: Developed Nation (2G)      209,313             100,690              1.41
B: Developing Nation (2G)     -1,176,017          -1,011,217           0.32
C: Developed Nation (1G)      166,952             40,982               1.17
D: Developing Nation (1G)     -91,300             39,224               0.73

(Operating Profit and NPV in ’000 US$; Return on Investment is a ratio.)


Table 1: Profit, NPV, and ROI values by case

Case A has the highest NPV and operating profit. A developed nation with the right amount of investment and relatively low input costs can capitalize on the earning potential of 2G biofuels, specifically miscanthus-based cellulosic ethanol. In this case, developed nations with well-developed, optimized 2G biofuel plants stand to earn substantial profits.

When choosing between a Case A (2G) and a Case C (1G) plant, a developed nation should prefer Case A, which has the higher NPV. We expect this result to hold, especially in the near future as 2G biofuel production becomes more efficient and realizes its cost savings in inputs relative to 1G biofuels. While the current capital, chemical, and maintenance costs for 2G biofuel projects exceed those of 1G, feedstock costs tend to be lower and projected revenues are higher. Assuming input prices stay the same and innovation and R&D on 2G biofuels lead to lower capital and conversion costs, 2G biofuels could be considered a rewarding investment for developed countries, generating a growing stream of profits.

Case B, the developing-nation (2G) plant, should not proceed because of its negative NPV, that is, sustained investment losses. This is because 2G biofuels require large initial investments, and the revenues needed to cover those costs are unlikely to materialize. In addition, in many developing nations costs can be high due to corruption, waste, and inexperience in handling the technology and production processes; furthermore, export or domestic markets can be difficult to find or penetrate. Another factor is the high relative cost for businesses in developing countries to convert their machinery to biofuel production.

For Case D, we find that while the NPV is positive, indicating that we should proceed with the investment, the operating profit is actually negative. The NPV calculation is thus deceptive: the project is kept alive by FDI or by financing from investments. Investment in 1G in most developing nations can proceed, but would require significant public-private investment for the plant and operation to survive and produce. Most developing nations are familiar with the production of 1G biofuels, although investment costs may require external support.

Clearly, Case A has the highest ROI of the four cases due to the high potential revenue and low expense of 2G biofuel production in a developed nation. Although Case D has a positive NPV, its return on investment is very low (less than 1), suggesting that in the long run, 1G biofuel production in a developing country could be unsustainable and unprofitable.

The model findings agree with the empirical evidence presented in Section II. Although the future of biofuels seems secure for most developed countries like the US and for developing countries with already robust biofuel industries, such as Brazil, the use of biofuels as a mainstream fuel outside these types of countries is unlikely.

Summary of Recommendations

From Table 1, we see that 2G biofuels are generally more profitable than 1G biofuels, although 2G biofuel revenues per gallon in developing countries lag behind those of developed countries. It is possible that the lack of a strong export market and lower domestic demand reduce the revenue per gallon of a developing nation’s biofuel yield. The lower demand could result from less emphasis on biofuel policy or from the cost of converting machinery to accept biofuels. These developing nations may have to reduce the price of their biofuels to lure buyers away from relatively cheaper gasoline, leading to smaller revenues and risking economic losses in the long run.

Revenues from developing countries for 1G and 2G biofuels can be quite substantial, though the profit per barrel is negative due to high relative costs and inadequate revenue generation to offset them. Revenue generation for 2G biofuels is much higher than for 1G biofuels, suggesting that 2G biofuels could be a lucrative investment for most developing countries in the future as technology and domestic operations become less expensive. For most developing nations, producing 1G biofuels is cheaper, and industry is more familiar with the technology required.

The US has the highest production capacity for biofuels and nets the largest NSAR value based on revenues per gallon, while Brazil has the next highest value. Profits per gallon are still generally higher for developed-nation 2G biofuels than for 1G biofuels, as reflected in the higher revenues for 2G biofuels.

The US and Germany are capable of producing both kinds of biofuels at a profitable rate; however, capacity and total land allocation will ultimately decide the potential of a developed country to produce biofuels.

Anthony Gokianluy,  Matthew Cason,  & Rohit Satishchandra are Green Economics Consultants, The Green Economics Group, University of Chicago. Mr. Gokianluy also serves as a client consultant.  They would like to thank Professor Theodore Steck, M.D. of the University of Chicago and the Center for International Studies for their assistance throughout the writing process.

This article is a commentary on the actual research paper, which can be found here

References