Terra Mulata

Reading through the recent postings I came across a mention of Terra Mulata for the first time. It is treated as a companion soil to the increasingly recognized Terra Preta soils. I do not assume that this is a particularly recent coinage, but it will surely now be used in conjunction with Terra Preta in scholarly papers. The abstract attached below defines both terms and their apparent usage rather well.

This resolves an issue that had been bothering me from the beginning and that I think is now totally clarified.

It was obvious that a full-blown Terra Preta soil developed over decades, if not centuries, and entailed a lot of ongoing effort every year. Yet it was also obvious that even one year's effort gave you a productive soil to work with. Now we have a clear resolution of this problem.

Terra Preta was solely associated with the settlement site itself, where we could expect each family to sustain a hectare of land. The use of an ad hoc earthen kiln would convert a season's supply of plant waste and cultural waste into a mound of soil and biochar. The clay shards were likely there only because of breakage. Human waste and the like were likely buried in the general waste tip. Every season this tip was packed down and covered first with a layer of palm fronds to keep the final layer of soil from initially smothering the burn. This was then fired to produce a load of soil and char. The enhanced soil was then carried to the seed hills.

When we come to Terra Mulata we have a slightly different story. We are now dealing with exterior fields that were producing large crops, likely to support the state itself (taxes and trade inventory). Corn is a great way to pay your taxes and to store a surplus for much later expenditure, and it is thus a good example of what is possible.

In this case, they only put in enough char to preserve the soil fertility. There are also no cultural artifacts such as clay shards, which eliminates the hypothesis that the shards had anything to do with the manufacture of Terra Preta.

The abstract shows that the agricultural culture was very well managed, with large field monocultures not unlike today's. You must appreciate that this existed when the only source of energy was man himself. The rise of large-field farming in Europe at least had oxen and horses to support it. The diversity of crops is also a surprise. Where were the markets?

We are slowly overcoming the centuries of scholarly dismissal of this culture's wonderful achievements and are now learning from them.

An earthen corn kiln likely works just fine without the special use of wet clay. And it is very clear that all our tropical soils can be converted to Terra Mulata within a much shorter time frame than expected, and the process does not really need to be repeated often once the soil is established.

Going through the numbers again for the corn kiln in particular, we have the following:
A given acre of land is cleared by fire. The three sisters are planted in seed hills occupying about twenty-five percent of the land. The three sisters consist of corn, legumes for beans and nitrogen fixation, and the odd squash to provide soil cover, suppress weeds, and partially protect the soil from rain erosion.

At the end of the season, the dehydrated corn stalks are pulled and used to produce an earthen kiln. There are about ten tons of stover per acre. This will reduce to a little less than two tons of char and admixed soils. Once done, the soil-char mix is carried in baskets back to the seed hills.
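For anyone who wants to check the arithmetic, here is a minimal Python sketch of the per-acre numbers. The twenty percent char yield is my own assumption, chosen only so that the result matches the "little less than two tons" figure above; real earthen kiln yields will vary.

stover_tons_per_acre = 10.0      # dehydrated corn stalks pulled at the end of the season
char_yield_fraction = 0.2        # assumed mass fraction surviving as char (my estimate)
seed_hill_fraction = 0.25        # share of the acre occupied by seed hills

char_tons = stover_tons_per_acre * char_yield_fraction
print(f"Char and admixed soil produced: ~{char_tons:.1f} tons per acre")
print(f"Concentrated onto the {seed_hill_fraction:.0%} of the acre in seed hills: "
      f"~{char_tons / seed_hill_fraction:.0f} tons per treated acre-equivalent")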

Eventually the build-up of charcoal will naturally migrate into the surrounding soils through any tillage done for other crop types.



Terra Preta and Terra Mulata in Tapajônia: A Subsidy from Culture
Joseph M. McCann

Abstract. The large, multi-ethnic chieftaincy centered on Santarém, Brazil disappeared rapidly in the face of early European slave raids and subsequent missionization, but its physical legacy persists. Ornate ceramics, bamboo forests, relic crops, roads, wells, and manmade waterways in association with patches of anthropogenic dark earth corroborate 17th century chroniclers’ depictions of settled farmers. The evidence suggests the people of Tapajônia lived in permanent settlements and practiced intensive agriculture; they extended the ranges of useful plants, established plantations of fruit-bearing trees, and enhanced soil fertility through inputs of ash, organic mulch and household wastes, creating two kinds of persistently fertile dark earths, the classic terra preta formed under settlements and the somewhat lighter, less chemically enriched terra mulata of agricultural contexts.

Patches of terra preta and terra mulata range from less than 1 ha to more than 100 ha, and are abundantly distributed throughout the region. The spatial distribution and landscape orientation of dark earth patches do not appear to significantly favor varzea or other riverine locations including bluffs, and the largest expanses are located in interior settings. This pattern does not fit the expectations of existing models of Amazonian settlement and subsistence. Rather, it suggests a strategy for optimizing access to a wide range of important resources within a complex landscape mosaic. In addition to inherent ecological factors, the choice of settlement and field locations may have been influenced by access to trade and kinship networks, vulnerability to attack, seasonal transhumance strategies, and cumulative anthropogenic modifications of the landscape.

These modifications, including the enhancement of soil fertility and concentration of useful species and crop germplasm, continue to benefit today's caboclos, in contrast to most recent development in Amazonia (e.g. mining, ranching, logging). As such, they are an important subsidy from an "extinct" culture. However, access to dark earth is limited by population growth and changing land tenure systems, and the techniques (such as applying mulch and ash, and inoculation with microbiota) which led to their creation are no longer practiced by today's shifting cultivators. Furthermore, the cultivation of dark earths destroys archaeological artifacts and stratigraphic context that could shed much light on these practices. Clearly, further study of the processes of creation and persistence of Amazonian dark earths is warranted, so that they may serve as models for the development of high-yield, land-intensive, yet sustainable land management strategies in the tropics.

J. McCann, Department of Social Sciences, New School University, New York, NY 10011. Email:
arapiuns1@msn.com; mccannj@newschool.edu

What Congress Can do about Oil Now

The USA is currently consuming 20,000,000 barrels of oil per day or about 25% of the global supply. No one is in a better position to reduce consumption on a scale that matters to everyone else. And where the USA leads the rest of the world will surely follow.

The auto industry is now desperately gearing up to transition away from oil-based products. Pricing alone is driving this move, as consumers have been forced to forgo the pleasure of driving their mobile fuel hogs. No one seems prepared to buy a vehicle for the pleasure of parking it. I certainly expect the advent of a very inexpensive electric for short-haul work. It makes eminent sense to use the SUV to haul the kids while using the electric to go in to work a lot further away. This is a new version of the two-car family that everyone was excited about in the fifties. Yes, I remember all that.

The only problem with that is that it can only be a transitional process and may actually have little net effect on the oil supply, even over several years. What is more, any attempt to legislate an outcome will be resented and paid for at the polls.

This again brings me back to the subject of diesel fuel. It represents about half of our consumption and it is primarily used by industry. It has never been very popular in the automotive industry.

It is thus possible for Congress to mandate a crash program to replace diesel with LNG, or liquefied natural gas. Apparently diesel engines can be converted easily over to LNG, and the manufacturers are already prepared to produce LNG engines directly. That means fleet conversion and industrial conversion are possible within a very short time line of two to five years. I would expect actual completion by the end of the five-year cycle.

I would expect that the first two years will be needed to establish the necessary infrastructure.

Natural gas is not in short supply, although a lot of the easiest supplies are again offshore. In any event, we have massive global supplies available to keep costs down for the trucking industry. It is also the cleanest possible hydrocarbon and will go a long way toward cleaning up the atmosphere.

California has seen the light and is already well down this road.

What Congress can do today is to mandate a swift conversion of the nation’s diesel consumption over to LNG ASAP.

This could release as much as ten million barrels of oil per day back into the global market. And our industry would even be using a cheaper fuel.
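Here is the same arithmetic as a quick Python sketch, taking the figures above at face value; the one-half diesel share is the estimate given above rather than a measured number, and the consumption figures come from the fact sheet that follows.

us_consumption_bpd = 20_687_000       # U.S. petroleum consumption, from the fact sheet below
world_consumption_bpd = 83_607_000    # 2005 world petroleum consumption, also below
diesel_share = 0.5                    # the "about half" estimate used above

print(f"U.S. share of world consumption: {us_consumption_bpd / world_consumption_bpd:.0%}")
print(f"Oil freed by a full diesel-to-LNG switch: "
      f"~{us_consumption_bpd * diesel_share / 1e6:.0f} million barrels per day")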


Gallons of Oil per Barrel
42

U.S. Crude Oil Production
5,102,000 barrels/day
Texas - 1,088,000 barrels/day
U.S. Crude Oil Imports
10,118,000 barrels/day

Top U.S. Crude Oil Supplier
Canada - 1,802,000 barrels/day

U.S. Petroleum Product Imports
3,589,000 barrels/day

U.S. Net Petroleum Imports
12,390,000 barrels/day

Top U.S. Total Petroleum Supplier
Canada - 2,353,000 barrels/day

U.S. Total Petroleum Exports
1,317,000 barrels/day

U.S. Petroleum Consumption
20,687,000 barrels/day

Crude Oil Domestic First Price (2007 wellhead price)
$66.52/barrel

Motor Gasoline Retail Prices (2007 U.S. City Average)
$2.85/gallon

Regular Grade Motor Gasoline Retail Prices (2007 U.S. City Average)
$2.80/gallon

Premium Motor Gasoline Retail Prices (2007 U.S. City Average)
$3.03/gallon

U.S. Motor Gasoline Consumption
9,253,000 barrels/day (388.6 million gallons/day)

U.S. Average Home Heating Oil Price
$2.37/gallon (excluding taxes)

U.S. Refiners Ranked Capacity (1/1/2006)
#1 - Baytown, Texas (ExxonMobil) 562,500 barrels/day

Top U.S. Petroleum Refining States
#1 - Texas 4,337,026 barrels/day
U.S. Proved Reserves of Crude Oil as of December 31, 2006
20,972 million barrels

Top U.S. Oil Fields (2005)
Prudhoe Bay, AK

Top U.S. Producing Companies (2006)
BP - 827,000 barrels/day

Total World Oil Production (2005)
82,532,000 barrels/day

Total World Petroleum Consumption (2005)
83,607,000 barrels/day

Plasma Storms

This particular story has absolutely nothing to do with the global climate, but I suppose a bit to do with near-Earth space weather. However, I could not resist posting this story by way of NASA. Amazingly, another mystery is now well understood.

As always, the simple idea that the magnetic field protecting Earth collects energy until it hits an explosive release point is almost obvious after the fact. I am sure it was conjectured a time or two. Now we have the hard data.

This also suggests that the build-up and discharge should follow a predictable cycle, whose baseline can be established to a high level of precision. Perhaps somewhat like the hurricane season.

If we can do this, then it may also be possible to correlate it with any changes taking place on the sun that may be having an effect on the climate, although I think we have many better proxies.

In any event it is all good stuff, and a mystery that had not been fully understood is now crystal clear to any school child.
Perhaps once again we are being reminded of the reality of a sea of magnetic energy impacting our planet's magnetic shielding.

Plasma Bullets Spark Northern Lights

July 24, 2008: Duck! Plasma bullets are zinging past Earth.

That's the conclusion of researchers studying data from NASA's five THEMIS spacecraft. The gigantic bullets, they say, are launched by explosions 1/3rd of the way to the Moon and when they hit Earth—wow. The impacts spark colorful outbursts of Northern Lights called "substorms."

"We have discovered what makes the Northern Lights dance," declares UCLA physicist Vassilis Angelopoulos, principal investigator of the THEMIS mission. The findings appear online in the July 24 issue of Science Express and in print August 14 in the journal Science.

The THEMIS fleet was launched in February 2007 to unravel the mystery of substorms, which have long puzzled observers with their unpredictable eruptions of light and color. The spacecraft wouldn't merely observe substorms from afar; they would actually plunge into the tempest using onboard sensors to measure particles and fields. Mission scientists hoped this in situ approach would allow them to figure out what caused substorms--and they were right.
The discovery came on what began as a quiet day, Feb 26, 2008. Arctic skies were dark and Earth's magnetic field was still. High above the planet, the five THEMIS satellites had just arranged themselves in a line down the middle of Earth’s magnetotail—a million kilometer long tail of magnetism pulled into space by the action of the solar wind. That's when the explosion occurred.

A little more than midway up the THEMIS line, magnetic fields erupted, "releasing about 10^15 Joules of energy," says Angelopoulos. "For comparison, that's about as much energy as a magnitude 5 earthquake."

Although the explosion happened inside Earth's magnetic field, it was actually a release of energy from the sun. When the solar wind stretches Earth's magnetic field, it stores energy there, in much the same way energy is stored in a rubber band when you stretch it between thumb and forefinger. Bend your forefinger and—crack!—the rubber band snaps back on your thumb. Something similar happened inside the magnetotail on Feb. 26, 2008. Over-stretched magnetic fields snapped back, producing a powerful explosion. This process is called "magnetic reconnection" and it is thought to be common in stellar and planetary magnetic fields.

The blast launched two "plasma bullets," gigantic clouds of protons and electrons, one toward Earth and one away from Earth. The Earth-directed cloud crashed into the planet below, sparking vivid auroras observed by some 20 THEMIS ground stations in Canada and Alaska. The opposite cloud shot harmlessly into space, and may still be going for all researchers know.

Above: An artist's concept of the THEMIS satellites lined up inside Earth's magnetotail with an explosion between the 4th and 5th satellites.

The THEMIS satellites were perfectly positioned to catch the shot.

"We had bulls-eyes on our solar panels," says THEMIS project scientist David Sibeck of NASA's Goddard Space Flight Center. "Four of the satellites were hit by the Earth-directed cloud, while the opposite cloud hit the fifth satellite." Simple geometry pinpointed the site of the blast between the 4th and 5th satellite or "about 1/3rd of the way to the Moon."

No damage was done to the satellites. Plasma bullets are vast, gossamer structures less dense than the gentlest wisp of Earth's upper atmosphere. They whoosh past, allowing THEMIS instruments to sample the cloud’s internal particles and fields without truly buffeting the satellite.

This peaceful encounter on the small scale of a spacecraft, however, belies the energy deposited on the large scale of a planet. The bullet-shaped clouds are half as wide as Earth and 10 times as long, traveling hundreds of km/s. When such a bullet strikes the planet, brilliant auroras and geomagnetic storms ensue.

Right: A collection of ground-based All-Sky Imagers (ASI) captures the aurora brightening caused by a substorm. Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio.

"For the first time, THEMIS has shown us the whole process in action—from magnetic reconnection to aurora borealis," says Sibeck. "We are finally solving the puzzle of substorms."

The THEMIS mission is scheduled to continue for more than another year, and during that time Angelopoulos expects to catch lots more substorms--"dozens of them," he says. "This will give us a chance to study plasma bullets in greater detail and learn how they can help us predict space weather."

"THEMIS is not finished making discoveries," believes Sibeck. "The best may be yet to come."

New Model Farm Update

After a year of writing this column, it is timely to revisit and update the conceptualization of the new model farm. What have we added to our bag of tools and how can we deploy this throughout the globe?

1 Forest management and optimization will now be fully and economically integrated into farm management because cellulose and lignin harvesting is now becoming possible. Wood chips can be sold at the farm gate rather than presenting a disposal cost or simply being burned. This can be applied from the Arctic tree line to the Amazon jungle. The lignin is a direct source of gasoline and diesel fuels, while the cellulose is becoming a source of ethanol.

2 Shallow wetlands can again become growing pads of cattails over a range that includes at least the boreal forests and the southern jungles. This crop easily produces around thirty dry tons per acre (150 tons wet) of starch-rich root material, which again is a good feedstock for ethanol as well as providing product for human consumption.

3 The advent of the first two promotes the optimization of both tree-based products and wetland products not easily exploited otherwise.

4 Many additional forms of animal and fish husbandry can also be easily implemented with this expansion of boots on the ground. Something as simple as bison in the eastern woodlands, combined with tight management of venison, becomes a very good agribusiness. And we have barely begun to find ways to produce fish in wetlands.

In fact the energy demands of agriculture alone will drive this agricultural revolution. That we can supply the fuel for long-haul travel is a natural outcome of this, and a secure method of providing sustainable supply.

I for one never imagined that the boreal forest would ever have any commercial value for agriculture. Now we can imagine cattails and wood chips and even a little moose and caribou husbandry on top of various fish production scenarios. And let us not forget beaver husbandry with which I was intimately involved back in the early eighties.

5 Biochar production from plant wastes, and from corn in particular because of its sheer volume, will be applied to soils globally. This will manage nutrient availability and sustain soil fertility, as well as reconstituting all soils as fertile terra pretas. The only remaining restraint will be the availability of water.

Again, I never imagined that it might be possible to manufacture soil. Yet this appears possible. You can take a patch of desert sand and create seed hills of sand mixed with biochar. You then plant corn and legumes and squash in the seed hill and add sufficient water. Use the corn stover to produce more biochar and repeat. Five years of this will establish the soil, and another generation will have a splendid, bulked-out, humus-rich soil inches thick. This can all be done without any tools other than a hoe, a shovel and a basket.

6 Cheap nanosolar power will permit atmospheric water harvesting. This opens the door for the recovery of deserts everywhere.

In time, the restoration of the deserts will increase the temperature of the northern hemisphere and produce a much more temperate climate. The Arctic will be open for navigation. Global agricultural productivity will respond by rising as these global terraforming efforts go to completion.

In fact it is now possible to imagine every acre of land between the polar circles feeling the hand of human husbandry. Even wild woods set aside for conservation can benefit from the harvesting of debris to manage forest fire destruction. It should thus be clear that the earth can sustain and feed populations massively larger than we have ever thought possible. Ten billion is a gimme, and only a hundred billion begins to seem extreme.

But what is our footprint? With terra preta, most nutrients are readily recycled and even built up in the soils. The expansion of such soils is strictly a function of getting water into every available nook and cranny. Energy is produced by solar conversion and sustainable harvesting methods. We can have all we want.

Our only limit is in fact the land surface of the earth and its maximum production capacity. With a gross land surface area approaching 150,000,000 square kilometers, of which we can easily disregard a third, we have access to 100,000,000 square kilometers, or ten billion hectares. If we assume that a hectare can support a family, a population of around fifty billion is not too far-fetched. By then we should be finding other limits.
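A quick Python sketch of that arithmetic follows. The one-family-per-hectare figure is the working assumption stated above, and the five-persons-per-family household size is my own added assumption.

gross_land_km2 = 150e6            # approximate land surface of the Earth
usable_fraction = 2.0 / 3.0       # "we can easily disregard a third"
persons_per_family = 5            # assumed household size (not from the text)

usable_hectares = gross_land_km2 * usable_fraction * 100    # 1 km^2 = 100 ha
population = usable_hectares * persons_per_family           # one family per hectare

print(f"Usable land: {usable_hectares:.1e} hectares")
print(f"Implied population: ~{population / 1e9:.0f} billion people")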

The possibility of a zero footprint is now very real. Therefore this population is possible, and space colonies become possible two seconds after we invent a continuous one-G thruster allowing us to move freely. We now know how to live there without importing anything once established.

And as I have posted earlier, the probability is no longer zero that the ancestral races of mankind already did this around fifteen thousand years ago. I now have way too much confirming evidence. We are merely the schmucks left with the nasty job of completing the terraforming process. I suspect that I will have to grind out a book on this subject to properly make the case. To me it is now rather obvious. I just need to line up a publisher.


Bussard Fusion

I have commented on the late Robert Bussard's efforts to harness fusion energy. This article is an accessible explanation of the technology. As I recently reported, the effort to replicate his earlier, aborted work is advancing splendidly, and this should result in news.

Their success will put us much closer to a thruster able to lift a large mass by applying and maintaining one G of gross thrust. Although there is obvious enthusiasm for long trips outside the solar system, the real value comes from being able to explore and develop the solar system itself.

Of course, having unlimited grid power at a trivial cost is no small bonus either. And it is impossible to overstate the importance of huge amounts of cheap energy. All our difficult processes need huge amounts of energy, usually supplied by very expensive means. To start with, imagine having the energy to dissociate metals in a plasma arc, then blasting them through an electrostatic field to separate them, and then assembling them in a mold to shape a profoundly complex object. All this becomes possible.

Bussard’s work does appear to have a chance and should be watched closely.

Bussard Fusion Systems

I've frequently mentioned a type of electrostatic fusion reactor (and associated rocket systems) designed by Robert Bussard and detailed in the papers listed below. In it the fusion fuel is confined by a spherical voltage potential well on the order of 100,000 volts. When the fuel reacts, the particles are ejected with energies on the order of 2 MeV, so they escape the potential well of the voltage confinement area. The standard reactor design discussed uses p + 11B (hydrogen and boron-11) as fuel, since it fuses without releasing any of its energy as radiation or neutrons. All the energy of the reaction is contained in the kinetic energy of the released charged particles. If the fusion reaction is surrounded with voltage gradients or other systems to convert the kinetic energy of high-speed charged particles directly to electricity, virtually all of their energy (about 98%) can be captured directly as electricity, making a ridiculously compact and simple electrical generator.

For example: a Bussard fusion reactor with a radius of about 2.5-3 meters burning p + 11B (hydrogen and boron-11) produces 4,500-8,000 megawatts (4.5-8 E9 watts) of electricity and weighs about 14 tons (0.00174 kg per kWe, or 575 kWe/kg). That's enough electricity per reactor to power a couple of big cities, and it will be the standard power plant of most of our support craft. This isn't nearly enough power for the drive systems of one of the Explorer ships of course, but the system can be scaled up and retain its efficiency.
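As a sanity check on the quoted specific power, here is a two-line Python sketch. I am reading "14 tons" as metric tonnes, which is my assumption; with it, the quoted 575 kWe/kg and 0.00174 kg per kWe come out within rounding.

power_watts = 8.0e9          # upper end of the quoted 4,500-8,000 MW output
mass_kg = 14_000             # "about 14 tons", read as metric tonnes (my assumption)

kwe_per_kg = (power_watts / 1e3) / mass_kg
print(f"Specific power: ~{kwe_per_kg:.0f} kWe/kg, or ~{1 / kwe_per_kg:.5f} kg per kWe")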

This design is Bussard's variation of the Farnsworth/Hirsch electrostatic confinement fusion technology. What's important to us is that it makes such a fantastically small, light power plant that it can be used as part of a high thrust-to-weight-ratio propulsion system for spacecraft and aircraft. Bussard and associates have designed propulsion versions with a specific impulse of between 1,500 and 6,000 sec. (Best chemical specific impulse is 450.) I'll discuss the rocket applications below.

Comparison of Fuel Cycles
As I mentioned above, certain fusion fuels release all the energy of their fusion reaction in the kinetic energy of the released particles. The table below lists the fuels with this characteristic, and the energy they release per kilogram of fuel.

Fuel --> Exhaust                  Energy / Reaction   Number of Particles   Joule / kg
p + 11B --> 3 4He                 8.68 MeV            12                    6.926 E13
De + 3He --> 4He + p              18.3 MeV            5                     3.505 E14    *
p + 6Li --> 4He + 3He             4.00 MeV            7                     5.472 E13    **
3He + 6Li --> 2 4He + p           16.0 MeV            12                    1.277 E14    **
6Li + 6Li --> 3 4He (Combined)    20.0 MeV            12                    1.596 E14    **
3He + 3He --> 4He + 2 p           12.9 MeV            6                     2.059 E14    ***

* De + 3He reactors would have at least some De + De reactions, which produce neutrons; possibly 2% of the energy would be released this way. This neutron bombardment cannot be directed like the charged particles, so the energy would impact the rear of the ship. Neutron bombardment is a very toxic form of radiation for people and metal, and would cause a tremendous heat load on the rear of the ship.

To avoid shielding mass, we could put the reactor behind a shielding wall at the back of the ship, and make the reactor out of open mesh to limit its exposure.

** Since p can be reused in the two-stage p + 6Li --> 4He + 3He and 3He + 6Li --> 2 4He + p reaction, its mass isn't counted as consumed in the reaction. This two-stage reaction effectively combines into 6Li + 6Li --> 3 4He with 20 MeV of total output.

*** 3He + 3He reactions are difficult to convert into electricity in a Bussard reactor, but can be used as a direct plasma thrust.

Energy is given in million electron volts (MeV). An electron volt is equal to 1.60219 E-19 Joule, and a Joule per second is equal to a Watt. Mass in this case is the number of protons and neutrons involved in the reaction; they each weigh about 1.673 E-27 kg. Since p + 11B has 12 protons and neutrons per 8.68 MeV reaction, the energy per kilogram works out as follows. (Note: I'm ignoring electrons. Life's too short, and they're too light.)

8.68 MeV per reaction
= 8.68 E6 eV per (12 * 1.673 E-27 kg)
= 4.324 E32 eV / kg
= 6.926 E13 Joule / kg    (1 eV = 1.60219 E-19 Joule)
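The same conversion can be run for every fuel in the table. A short Python sketch, using the constants quoted above and the nucleon counts from the table (which follow the author's convention of counting the reused proton in the two-stage lithium reactions), reproduces the Joule/kg column:

EV_TO_JOULE = 1.60219e-19       # joules per electron volt, as quoted above
NUCLEON_KG = 1.673e-27          # approximate proton/neutron mass, kg

fuels = {                       # reaction: (energy in MeV, nucleons counted in the table)
    "p + 11B   --> 3 4He":      (8.68, 12),
    "De + 3He  --> 4He + p":    (18.3, 5),
    "p + 6Li   --> 4He + 3He":  (4.00, 7),
    "3He + 6Li --> 2 4He + p":  (16.0, 12),
    "6Li + 6Li --> 3 4He":      (20.0, 12),
    "3He + 3He --> 4He + 2 p":  (12.9, 6),
}

for name, (mev, nucleons) in fuels.items():
    joules_per_kg = mev * 1e6 * EV_TO_JOULE / (nucleons * NUCLEON_KG)
    print(f"{name:26} {joules_per_kg:.3e} J/kg")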

Since the fusion reactions listed above release all their energy in the kinetic energy of their exhaust particles, the most direct way to power a rocket is to use these particles directly as a fusion rocket exhaust stream. This can be done by placing an uncontained fusion reactor (without any electrical power conversion equipment) in the focus of a paraboloidal shell. The shell then reflects the fusion particles to the rear in a narrow beam (20-30 degree width), making a high-efficiency plasma rocket.

The efficiency of such a rocket depends on the exhaust velocity of the fuel and the energy released per kilogram of fuel. This is described in the table below.

QED Direct plasma thrusters

Fuel --> Exhaust                  Energy / Reaction   (f) Mass conversion fraction   Joules / kg   Exhaust speed
p + 11B --> 3 4He                 8.68 MeV            1287                           6.926 E13     11,800,000 m/s
p + 6Li --> 4He + 3He             4.00 MeV            1645                           5.472 E13
3He + 6Li --> 2 4He + p           16.0 MeV            704                            1.277 E14
6Li + 6Li --> 3 4He (Combined)    20.0 MeV            564                            1.596 E14     17,800,000 m/s
3He + 3He --> 4He + 2 p           12.9 MeV            437                            2.059 E14     20,300,000 m/s
De + 3He --> 4He + p              18.3 MeV            257                            3.505 E14     26,500,000 m/s

Note that: watts are Joules per second.

f is the fraction of the fuel mass converted to energy, i.e. an f of 257 means 1/257th of the fuel mass is converted to energy.

For example: the fusion plasma from a p + 11B reaction has an exhaust velocity of 11,800,000 meters per second. This translates to a specific impulse of approximately a *million* seconds, i.e. a pound of fuel consumed would provide a million pounds of thrust for a second. For a higher-thrust, though less efficient, rocket we can use the fusion plasma to heat a working fluid. If you dilute the plasma with 100 times its weight in the following propellants (assuming I figured out the right equations), the direct QED plasma drive generates the following specific impulses:

Fluid            Specific Impulse (Isp) in seconds
H2  (Hydrogen)   14,000
NH3 (Ammonia)    3,500
H2O (Water)      5,000

For an externally fueled reaction, or for a short-range craft, the high thrust-to-weight reactions in the mixed plasma table would be desirable. But for the deceleration into the target star system, using only fuel and reaction mass stored in the ship, only the high-efficiency (high specific impulse), low-fuel-mass, pure fusion plasma reaction would be usable. It is those reactions we will consider in our fuel load calculations below.

The classic rocket equation gives the fuel-to-ship mass ratio needed to get a given change in speed with a fuel that has a given exhaust velocity.

dV   = desired change (or delta) in the ship's speed
Vexh = exhaust velocity of the material
M    = the fuel mass ratio = Exp(dV / Vexh)

The specific impulse of a fuel (the pounds of thrust a pound of fuel will give when burned in a second) is determined by dividing the fuel's exhaust velocity by 9.8 m/s^2.
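A minimal Python sketch ties these pieces together for p + 11B: exhaust velocity from the energy per kilogram (non-relativistic, assuming all of the reaction energy goes into the exhaust), the specific impulse, and the rocket-equation fuel mass ratios quoted in the table that follows. The mass-ratio loop uses the table's rounded 11,800,000 m/s so the quoted figures come out exactly.

import math

energy_j_per_kg = 6.926e13                 # p + 11B, from the table above
v_exh = math.sqrt(2 * energy_j_per_kg)     # since E = 1/2 * v^2 per kilogram of exhaust
print(f"Exhaust velocity: ~{v_exh:,.0f} m/s (the table rounds this to 11,800,000 m/s)")
print(f"Specific impulse: ~{v_exh / 9.8:,.0f} s")

v_table = 11.8e6                           # the table's rounded exhaust velocity
for dv in (75e6, 100e6, 125e6, 150e6):     # .25c, 1/3 c, .42c and .5c
    mass_ratio = math.exp(dv / v_table)    # M = Exp(dV / Vexh)
    print(f"dV = {dv / 1e6:.0f} E6 m/s -> fuel mass ratio ~{mass_ratio:,.0f}")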

The following table shows the fuel mass ratios necessary to get a ship fueled with various fuels up to the listed speeds. For example, to accelerate a ship up to 1/4 the speed of light (.25 c, or 75,000,000 m/s), a ship fueled with hydrogen and boron-11 (p + 11B) would need to start out with 576 times its own weight in fuel. To get the same ship to half the speed of light, it would need to carry over 300,000 times its weight in fuel! Fuel the same ship with deuterium and helium-3 (De + 3He) and you'd only need 17 and 287 times the ship's weight in fuel.

Fuel --> Exhaust                  Vexh              75 E6 m/s (.25c)   100 E6 m/s (1/3 c)   125 E6 m/s (.42c)   150 E6 m/s (.5c)
p + 11B --> 3 4He                 11,800,000 m/s    576.0              4,790.0              39,900               332,000
6Li + 6Li --> 3 4He (Combined)    17,800,000 m/s    67.6               275.0                1,120                4,570
3He + 3He --> 4He + 2 p           20,300,000 m/s    42.5               138.0                472                  1,620
De + 3He --> 4He + p              26,500,000 m/s    16.9               43.5                 112                  287

Note that the fuel ratios listed are only enough to accelerate the ship to, or decelerate it from, the listed speed. Carrying enough fuel to do both would require squaring the ratio. In our case that would mean that the amount of fuel that could get you to 1/2 light speed would only be enough to get you up to, and back down from, 1/4 light speed.

One thing the table illustrates clearly is why engineers want high exhaust speeds and high-energy fuels. Doubling the exhaust speed can cut 70%-80% out of the fuel mass needed to reach a given speed. It also, all other things being equal, allows a smaller, lighter engine system and therefore ship. This is an especially great advantage when the desired maximum speeds are several times greater than the exhaust speeds.

So historically most starship design studies have gravitated toward the higher exhaust speed fuels like 3He + 3He or De + 3He. However, these fuels have a couple of serious practical problems not normally considered. First, helium-3 (3He) and deuterium (De) are extremely rare isotopes, virtually unknown on Earth. Starship design studies that used these fuels were forced to also assume massive mining operations scooping the materials out of Jupiter's atmosphere. Also, these fuels are extremely light gases. Unless they are kept liquid at cryogenic temperatures in sealed tanks, they will boil away into space. In our case, where deceleration fuel will need to be stored for a decade or more, this is highly questionable at best. The tanks themselves are also very bulky and therefore heavy; a tank could probably not carry much more than 20-30 times its own weight in fuel, which in our case would require a much bigger, heavier ship, and one that drops its empty tanks along the route. This becomes absurdly impractical.

An under-considered fuel is lithium-6 (6Li). As you can see from the table, it still has fairly good performance as a fuel. But it also has two very large practical advantages. At room temperature it is a solid structural metal requiring half the volume of helium-3, and it does not require any tanks or structural supports; it could even be used as part of the engine, allowing the ship to consume part of its engine structure toward the end of the flight. Its second advantage is that it is a cheap, plentiful material on Earth. So we can get access to virtually any amount of it we want, cheaply, and we can carry it in space for years without any tanks or leakage problems. This means we can easily carry far greater amounts of it on the ship. So even though it doesn't seem that efficient on paper, it has critical design advantages for a starship, and it is the baseline fuel chosen for our fusion-driven starship designs.

QED Fusion/Electric/Thermal rocket

Fusion/Electric/Thermal-rocket motor configuration.

The QED reactor can be used to produce electricity. That electric power can be efficiently converted to heat in a reaction mass such as hydrogen or water, which makes for a good drive system. It is assumed for the secondary thrusters on the starship and for all the primary rocket thrusters on support ships and shuttles.

Again, a Bussard fusion reactor with a radius of about 2.5-3 meters burning p + 11B produces 4,500-8,000 megawatts (4.5-8 E9 watts) of electricity and weighs about 14 tons (0.00174 kg per kWe, or 575 kWe/kg).

Bussard and co-authors in his various papers have designed rocket systems using these reactors to heat a reaction mass (usually with relativistic electron beams), giving a specific impulse of between 1,500 and 6,000 sec and thrust-to-weight ratios of 1.5-6. (Best chemical specific impulse is 450.) For example, in the Support_craft web document, I have worked up two classes of support ships using these engines: a high-speed aerospace shuttle, and a simpler vacuum rocket craft.

See:

H. D. Froning, Jr. and R. W. Bussard, "Fusion-Electric Propulsion for Hypersonic Flight," AIAA paper 93-261.

R. W. Bussard and L. W. Jameson, "The QED Engine Spectrum: Fusion-Electric Propulsion for Air-Breathing to Interstellar Flight," AIAA paper 93-2006.

R. W. Bussard and L. W. Jameson, "Some Physics considerations of Magnetic Inertial-Electrostatic Confinement: A New Concept for Spherical Converging-Flow Fusion," Fusion Technology vol 19 (March 1991).
\
R. W. Bussard and L. W. Jameson, "Design Considerations for Clean QED Fusion Propulsion Systems," prepared for the Eleventh Symposium on Space Nuclear Power and Propulsion, Albuquerque, 9-13 Jan 94.

R. W. Bussard , "The QED Engine System: Direct Electric Fusion-Powered Systems for Aerospace Flight Propulsion" by Robert W. Bussard, EMC2-1190-03, available from Energy/Matter Conversion Corp., 9100 A. Center Street, Manassas, VA 22110.

(This is an introduction to the application of Bussard's version of the Farnsworth/Hirsch electrostatic confinement fusion technology to propulsion, which gives specific impulses of between 1,500 and 5,000 seconds. Farnsworth/Hirsch demonstrated a 10^10 neutron flux with their device back in 1969, but it was dropped when panic ensued over the surprising stability of the Soviet Tokamak. Hirsch, responsible for the panic, has recently recanted and is back working on QED. -- Jim Bowery)

Lignin Breakthrough

I had given up on any early success in the conversion of wood chips into fuel several months ago. Mother Nature had not designed wood for easy disassembly, for obvious reasons. The most promising route was the use of enzymes to break down cellulose, and we have seen some progress there.

The other portion of the wood stock was the lignins. As far as I could see, conversion was not even being tried because of their chemistry. This was altogether an unsatisfactory situation with apparently limited promise of any early resolution.

This article announces an important breakthrough in the processing of the lignin.

Again, this is all in its infancy, but it appears to be tailor-made for the efficient processing of the wood chip feedstock. If one step can separate the cellulose first, and then a second stage can be used to process and separate the lignin component, what is left is a much smaller brew of complex chemicals amenable to further processing.

I have not sourced a chemical breakdown of wood, but suffice it to say that these two components are the bulk of it. We can now envisage processing that easily consumes the feedstock while producing a modest amount of other chemicals.

Cellulose can be expected to produce ethanol after being broken down into glucose by enzymes. And this article shows us that we can get diesel and gasoline from the lignin. This is about as good as it gets in terms of likely outcomes.

This article shows us that we have unexpectedly solved the lignin problem. So far, we are still struggling with the cellulose problem, but it just became much easier.

This also reinstates my early drive to bring proper forest management under agricultural control. Farmers are the natural operators, if a viable economic model can be created.

I had thought the conversion of wood to biochar a viable option, except that it does not compete with corn culture for that application. Corn biochar will naturally powder, while wood char will need to be ground, or the soil will soon become almost gravelly in texture.

Now we have an economic model that supports solid forest husbandry. An annual clean up of waste wood into the chipper will now produce tons of immediately salable fiber that can be taken to the local fuel producer. There should be enough coin in this process to keep everyone very happy, particularly if the price for the wood is similar to that of straw.

Initially there is a lot of labour involved but that will soon be managed and improved on.

This type of program will not need long term financial support although that should still be part of any long term forest management scheme. Only government is able to operate on century long horizons.


Chemical breakthrough turns sawdust into biofuel

* 17:08 18 July 2008
* NewScientist.com news service
* Colin Barras

A wider range of plant material could be turned into biofuels thanks to a breakthrough that converts plant molecules called lignin into liquid hydrocarbons.

The reaction reliably and efficiently turns the lignin in waste products such as sawdust into the chemical precursors of ethanol and biodiesel.

In recent years, the twin threats of global warming and oil shortages have led to growth in the production of biofuels for the transportation sector.

But as the human digestive system will attest, breaking down complex plant molecules such as cellulose and lignin is a tricky business.

Food crisis

The biofuels industry has relied instead on starchy food crops such as corn and sugar cane to provide the feedstock for their reactions. But that puts the industry into direct competition with hungry humans, and food prices have risen as a result.

A second generation of biofuels could relieve the pressure on crop production by breaking down larger plant molecules – hundreds of millions of dollars are currently being poured into research to lower the cost of producing ethanol from cellulose.

But cellulose makes up only about a third of all plant matter. Lignin, an essential component of wood, is another important component and converting this to liquid transport fuel would increase yields.

However, lignin is a complex molecule and, with current methods, breaks down in an unpredictable way into a wide range of products, only some of which can be used in biofuels.

Balancing act

Now Yuan Kou at Peking University in Beijing, China, and his team have come up with a lignin breakdown reaction that more reliably produces the alkanes and alcohols needed for biofuels.

Lignin contains carbon-oxygen-carbon bonds that link together smaller hydrocarbon chains. Breaking down those C-O-C bonds is key to unlocking the smaller hydrocarbons, which can then be further treated to produce alkanes and alcohol.

But there are also C-O-C bonds within the smaller hydrocarbons which are essential for alcohol production and must be kept intact. Breaking down the C-O-C bonds between chains, while leaving those within chains undamaged, is a difficult balancing act.

In hot water

Kou's team used their previous experience with selectively breaking C-O-C bonds to identify hot, pressurised water – known as near-critical water – as the best solvent for the reaction.

Water becomes near-critical when heated to around 250 to 300 °C and held at high pressures of around 7000 kilopascals. Under those conditions, and in the presence of a suitable catalyst and hydrogen gas, it reliably breaks down lignin into smaller hydrocarbon units called monomers and dimers.

The researchers experimented with different catalysts and organic additives to optimise the reaction. They found that the combination of a platinum-carbon catalyst and organic additives such as dioxane delivered high yields of both monomers and dimers.

Under ideal conditions, it is theoretically possible to produce monomers and dimers in yields of 44 to 56 weight % (wt%) and 28-29 wt% respectively. Weight % is the fraction of the solution's weight that is composed of either monomers or dimers.

Easy extraction

Impressively, the researchers' practical yields approached those theoretical ideals. They produced monomer yields of 45 wt% and dimer yields of 12 wt% – about twice what has previously been achieved.
Removing the hydrocarbons from the water solvent after the reaction is easy – simply by cooling the water again, the oily hydrocarbons automatically separate from the water.

It is then relatively simple to convert those monomers and dimers into useful products, says Ning Yan at the Ecole Polytechnique Fédérale de Lausanne, Switzerland, and a member of Kou's team.

That results in three components: alkanes with eight or nine carbon atoms suitable for gasoline, alkanes with 12 to 18 carbons for use in diesel, and methanol.

Efficient process

"For the first time, we have produced alkanes, the main component of gasoline and diesel, from lignin, and biomethanol becomes available," says Yan.

"A large percentage of the starting material is converted into useful products," he adds. "But this work is still in its infancy so other aspects related to economic issue will be evaluated in the near future."

John Ralph at the University of Wisconsin in Madison thinks the work is exciting. He points out that there have been previous attempts to convert lignin into liquid fuels. "That said, the yields of monomers [in the new reaction] are striking," he says.

Richard Murphy at Imperial College London, UK, is also impressed with Kou's work. "I believe that approaches such as this will go a considerable way to help us extract valuable molecules including fuels from all components of lignocellulose," he says.

T. Boone Pickens Goes LNG

You may have picked this up in the press, but T. Boone Pickens has publicly stepped up to the plate and is leading two initiatives aimed at securing future energy supplies and freeing the USA from dependence on the Middle East in particular.

His strategy has the merit of being applicable immediately. Essentially he is building mega wind farms to produce electricity and aggressively displace natural gas from the power generation business. This is happening now.

The displaced natural gas will be diverted to supply the transportation business. His press coverage mentions both cars and trucks.

In practice, the best immediate improvement will come from converting the diesel fleet directly to LNG for which a huge global resource is in place today.

I will never be the wind power supporter that many are, only because it truly needs to be integrated with a variable energy source that can be used to offset shortfalls. The best solution is a power dam with a well-filled reservoir, and an LNG power generator is not far behind.

In any event, nanosolar panels are not too far away, and they are capable of dealing with much of our power needs.

LNG, however, is the only viable alternative for the long-haulage business that can be rolled out immediately and fully installed within a couple of years. Biodiesel suffers from being too little, too late, and needs a protracted build-out on the supply side. The same holds true for ethanol.

It is obvious, though, that the industrial aspects of all three are being tested and mastered and their availability is imminent. The strength of LNG is that ample feedstock is available now. Feedstocks for either ethanol or biodiesel are grossly insufficient and will take a fair amount of time to nurture. Ethanol, though, is well on the way to being a significant part of the fuel cycle, having already reached critical mass. We are now seeing late-generation supply crops hit the market, so that sector should be optimized inside of five years.

Pickens is totally correct in his strategy of supporting LNG conversion in the transport industry while diverting natural gas from the power generation industry through the immediate build-out of wind farms. The important fact is that all of this can be done now, without any delay while supply is built out or technology matured.

He gives lip service to the use of LNG in the automobile industry which I see as window dressing. The real conversion must take place in the trucking industry which has both the immediate need and the resources. They do not have any option.

The auto industry is embarking on a multi-pronged conversion strategy that will shake out over the next five years. On top of that, the consumer even today has options to exploit. Most can switch down to a smaller car or public transport fairly easily. The advent of hybrids is now accelerating that process.

The advent of ethanol fuels is at hand and is now being implemented. Supply will soon follow this demand.

Biodiesel is now a cottage industry but is also building momentum.

These are all steps that can remove our need for any use of oil in the transportation sector by themselves.

Far more importantly, $140 oil has convinced everyone in North America that any dependence on offshore oil is A Bad Thing. They will support any and all initiatives that will speed us all out of the oil business. And where we go, the rest of the world will swiftly follow particularly Europe, India and China.

The electric car will slide into the short-haul niche as ample electric power is made available. Quite simply, with a nanosolar plant outside of town, it is very attractive to go pick up an inexpensive lightweight town car for personal mobility.

Pickens is just getting into this massive market a step ahead of the crush.

As an aside, LNG has always been held back from full exploitation because of perceived handling problems. These seem not to be the problem that they once were, but they still require active management that is easily supplied in the trucking industry. In the meantime, global resources are ample for centuries of use and are ultimately backstopped by the oceanic frozen methane reserves. It is also the cleanest-burning hydrocarbon fuel and will cut CO2 emissions substantially.

The transport industry could switch to biodiesel and perhaps will do some of that also, however, it is better all around to switch to LNG now. It alone will slash smog in the urban environment.

Oil Price History

This rather long report on the history of the oil market is excellent and should be read. It is also a good indicator of where current supply flexibility exists with the advent of extremely high prices. Good luck with the success of the auto formatting.

As I have posted, the current price regime cannot be sustained and we are seeing the cracks everywhere. Financial credit is likely still holding prices up, but this is in the face of visibly declining demand. There has been no major supply reduction anywhere. In fact, marginal production is coming on stream. Sooner or later we will have a price break that carries the price back toward the $100 mark. Everyone will feel relieved, although they should not.

Although we are now moving to get out of the oil business, which we well should, how much oil do we really have that is recoverable? Because of this price shock and the flap over global warming, which is a direct challenge to take all fossil fuels off line, it is safe to assume that the current daily production of around 87,000,000 barrels is as high as we will ever go.

This works out to around 30 billion barrels per year. That means that we are burning oil at the rate of one trillion barrels every thirty years.
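The arithmetic behind that claim, as a quick Python sketch using the 87 million barrel per day figure above; it comes out to roughly 32 billion barrels a year and a little over thirty years per trillion, consistent with the rounded figures here.

production_bpd = 87e6                  # assumed peak world production, barrels per day
per_year = production_bpd * 365        # barrels per year
years_per_trillion = 1e12 / per_year

print(f"Annual production: ~{per_year / 1e9:.0f} billion barrels")
print(f"Time to burn one trillion barrels: ~{years_per_trillion:.0f} years")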

Now mankind has already burned one trillion barrels in the form of cheap conventional oil.

That tells me that all those old reservoirs hold an additional trillion barrels recoverable using THAI (toe-to-heel air injection).

Canada has a trillion barrels in heavy oil, again mostly best recovered using THAI.

So does Venezuela.

Therefore, before we even get going, we have one hundred years of inventory available.

I fully expect that Canada will have an additional trillion barrels in the Mackenzie River valley and the related Arctic.

Brazil should also be good for another trillion barrels. It is just technically painful since it is, so far, deep ocean or hidden under impervious basalts.

And we do not want to forget the Caribbean and even the worked over regions in the USA.

In other words, before we have left the Americas, I am seeing two hundred years of supply at current production levels and the $100 price structure. The technology is here now to exploit all the resource.

Surely the rest of the world has much more.

It is just that we did not know until recently how to turn the tap on. THAI has solved that.

Energy Economist Newsletter

Oil Price History and Analysis (Updating)

A discussion of crude oil prices, the relationship between prices and rig count and the outlook for the future of the petroleum industry.

Introduction

Crude oil prices behave much as any other commodity with wide price swings in times of shortage or oversupply. The crude oil price cycle may extend over several years responding to changes in demand as well as OPEC and non-OPEC supply.

The U.S. petroleum industry's prices have been heavily regulated through production or price controls throughout much of the twentieth century. In the post-World War II era, U.S. oil prices at the wellhead averaged $24.98 per barrel adjusted for inflation to 2007 dollars. In the absence of price controls the U.S. price would have tracked the world price, averaging $27.00. Over the same post-war period the median for the domestic and the adjusted world price of crude oil was $19.04 in 2007 prices. That means that only fifty percent of the time from 1947 to 2007 have oil prices exceeded $19.04 per barrel. (See the World Price note below.)

Until the March 28, 2000 adoption of the $22-$28 price band for the OPEC basket of crude, oil prices only exceeded $24.00 per barrel in response to war or conflict in the Middle East. With limited spare production capacity OPEC abandoned its price band in 2005 and was powerless to stem a surge in oil prices which was reminiscent of the late 1970s.

[Graph: Crude Oil Prices 1947 - May 2008]

*World Price - The only very long-term price series that exists is the U.S. average wellhead or first purchase price of crude. When discussing long-term price behavior this presents a problem, since the U.S. imposed price controls on domestic production from late 1973 to January 1981. In order to present a consistent series, and also to reflect the difference between international prices and U.S. prices, we created a world oil price series consistent with the U.S. wellhead price by adjusting the wellhead price, adding the difference between the refiners' acquisition price of imported crude and the refiners' average acquisition price of domestic crude.

The Very Long Term View

The very long term view is much the same. Since 1869 US crude oil prices adjusted for inflation have averaged $21.05 per barrel in 2006 dollars compared to $21.66 for world oil prices.

Fifty percent of the time, U.S. and world prices were below the median oil price of $16.71 per barrel.

If long-term history is a guide, those in the upstream segment of the crude oil industry should structure their business to be able to operate at a profit with prices below $16.71 per barrel half of the time. The very long-term data and the post-World War II data suggest a "normal" price far below the current price.

[Graph: Crude Oil Prices 1869-2007]


The results are dramatically different if only post-1970 data are used. In that case U.S. crude oil prices average $29.06 per barrel and the more relevant world oil price averages $32.23 per barrel. The median oil price for that time period is $26.50 per barrel.

If oil prices revert to the mean, this period is likely the most appropriate for today's analyst. It follows the peak in U.S. oil production, eliminating the effects of the Texas Railroad Commission, and is a period when the Seven Sisters were no longer able to dominate oil production and prices. It is an era of far more influence by OPEC oil producers than they had in the past. As we will see in the details below, influence over oil prices is not equivalent to control.

[Graph: Crude Oil Prices 1970-2007]

Post World War II

Pre Embargo Period

Crude oil prices ranged between $2.50 and $3.00 from 1948 through the end of the 1960s. The price of oil rose from $2.50 in 1948 to about $3.00 in 1957. When viewed in 2006 dollars, an entirely different story emerges, with crude oil prices fluctuating between $17 and $18 during the same period. The apparent 20% price increase just kept up with inflation.

From 1958 to 1970 prices were stable at about $3.00 per barrel, but in real terms the price of crude oil declined from above $17 to below $14 per barrel. The decline in the price of crude when adjusted for inflation was amplified for the international producer in 1971 and 1972 by the weakness of the US dollar.

OPEC was formed in 1960 with five founding members: Iran, Iraq, Kuwait, Saudi Arabia and Venezuela. Two of the representatives at the initial meetings had studied the Texas Railroad Commission's methods of influencing price through limitations on production. By the end of 1971 six other nations had joined the group: Qatar, Indonesia, Libya, United Arab Emirates, Algeria and Nigeria. From the foundation of the Organization of Petroleum Exporting Countries through 1972, member countries experienced a steady decline in the purchasing power of a barrel of oil.

Throughout the post war period exporting countries found increasing demand for their crude oil but a 40% decline in the purchasing power of a barrel of oil. In March 1971, the balance of power shifted. That month the Texas Railroad Commission set proration at 100 percent for the first time. This meant that Texas producers were no longer limited in the amount of oil that they could produce. More importantly, it meant that the power to control crude oil prices shifted from the United States (Texas, Oklahoma and Louisiana) to OPEC. Another way to say it is that there was no more spare capacity and therefore no tool to put an upper limit on prices. A little over two years later OPEC would, through the unintended consequence of war, get a glimpse at the extent of its power to influence prices.

[Graph: World Events and Crude Oil Prices 1947-1973]

[Graph: Middle East, OPEC and Oil Prices 1947-1973]

Middle East Supply Interruptions

Yom Kippur War - Arab Oil Embargo

In 1972 the price of crude oil was about $3.00 per barrel, and by the end of 1974 the price of oil had quadrupled to over $12.00. The Yom Kippur War started with an attack on Israel by Syria and Egypt on October 6, 1973. The United States and many countries in the western world showed support for Israel. As a result of this support, several Arab exporting nations imposed an embargo on the countries supporting Israel. While Arab nations curtailed production by 5 million barrels per day (MMBPD), about 1 MMBPD was made up by increased production in other countries. The net loss of 4 MMBPD extended through March of 1974 and represented 7 percent of free world production.

If there was any doubt that the ability to control crude oil prices had passed from the United States to OPEC, it was removed during the Arab Oil Embargo. The extreme sensitivity of prices to supply shortages became all too apparent when prices quadrupled in six short months.

From 1974 to 1978 world crude oil prices were relatively flat, ranging from $12.21 to $13.55 per barrel. When adjusted for inflation, world oil prices were in a period of moderate decline over that period.

[Figure: U.S. and World Events and Oil Prices 1973-1981]



[Figure: OPEC Production and Crude Oil Prices 1973-2007]

Crises in Iran and Iraq

Events in Iran and Iraq led to another round of crude oil price increases in 1979 and 1980. The Iranian revolution resulted in the loss of 2 to 2.5 million barrels per day of oil production between November, 1978 and June, 1979. At one point production almost halted.

While the Iranian revolution was the proximate cause of what would be the highest prices in post-WWII history, its impact on prices would have been limited and of relatively short duration had it not been for subsequent events. Shortly after the revolution production was up to 4 million barrels per day.

Iran, weakened by the revolution, was invaded by Iraq in September, 1980. By November the combined production of both countries was only a million barrels per day, 6.5 million barrels per day less than a year before. As a consequence, worldwide crude oil production was 10 percent lower than in 1979.

The combination of the Iranian revolution and the Iraq-Iran War caused crude oil prices to more than double, increasing from $14 per barrel in 1978 to $35 per barrel in 1981.

Twenty-six years later Iran's production is only two-thirds of the level reached under the government of Mohammad Reza Pahlavi, the former Shah of Iran.

Iraq's production remains about 1.5 million barrels below its peak before the Iraq-Iran War.

[Figure: Iran Oil Production 1973-2007]


[Figure: Iraq Oil Production 1973-2007]


US Oil Price Controls - Bad Policy?

The rapid increase in crude prices from 1973 to 1981 would have been much less were it not for United States energy policy during the post-embargo period. The US imposed price controls on domestically produced oil in an attempt to lessen the impact of the 1973-74 price increase. The obvious result of the price controls was that U.S. consumers of crude oil paid about 50 percent more for imports than for domestic production, and U.S. producers received less than the world market price. In effect, the domestic petroleum industry was subsidizing the U.S. consumer.
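A minimal sketch of the two-tier pricing effect follows, assuming a hypothetical controlled price, world price, and domestic share of refinery inputs; none of these numbers come from the article.

```python
# Sketch: refiners' blended acquisition cost under two-tier price controls.
# All numbers are hypothetical, chosen only to illustrate the ~50% gap between
# controlled domestic crude and imported crude described above.
domestic_price = 8.00    # $/bbl, price-controlled domestic crude (hypothetical)
import_price = 12.00     # $/bbl, world market price, ~50% higher (hypothetical)
domestic_share = 0.65    # fraction of refinery inputs that is domestic (hypothetical)

blended = domestic_share * domestic_price + (1 - domestic_share) * import_price
print(round(blended, 2))  # consumers faced this blended cost; producers received $8
```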

Did the policy achieve its goal? In the short term, the recession induced by the 1973-1974 crude oil price rise was less severe because U.S. consumers faced lower prices than the rest of the world. However, it had other effects as well.

In the absence of price controls U.S. exploration and production would certainly have been significantly greater. Higher petroleum prices faced by consumers would have resulted in lower rates of consumption: automobiles would have had higher miles per gallon sooner, homes and commercial buildings would have been better insulated and improvements in industrial energy efficiency would have been greater than they were during this period. As a consequence, the United States would have been less dependent on imports in 1979-1980 and the price increase in response to Iranian and Iraqi supply interruptions would have been significantly less.


[Figure: U.S. Price Controls 1973-1981, Refiners' Acquisition Cost of Crude Oil]

OPEC's Failure to Control Crude Oil Prices


OPEC has seldom been effective at controlling prices. While often referred to as a cartel, OPEC does not satisfy the definition. One of the primary requirements is a mechanism to enforce member quotas. The old joke went something like this. What is the difference between OPEC and the Texas Railroad Commission? OPEC doesn't have any Texas Rangers! The only enforcement mechanism that has ever existed in OPEC was Saudi spare capacity.


With enough spare capacity at times to be able to increase production sufficiently to offset the impact of lower prices on its own revenue, Saudi Arabia could enforce discipline by threatening to increase production enough to crash prices. In reality even this was not an OPEC enforcement mechanism unless OPEC's goals coincided with those of Saudi Arabia.


During the 1979-1980 period of rapidly increasing prices, Saudi Arabia's oil minister Ahmed Yamani repeatedly warned other members of OPEC that high prices would lead to a reduction in demand. His warnings fell on deaf ears.

Surging prices caused several reactions among consumers: better insulation in new homes, increased insulation in many older homes, more energy efficiency in industrial processes, and automobiles with higher efficiency. These factors along with a global recession caused a reduction in demand which led to falling crude prices. Unfortunately for OPEC only the global recession was temporary. Nobody rushed to remove insulation from their homes or to replace energy efficient plants and equipment -- much of the reaction to the oil price increase of the end of the decade was permanent and would never respond to lower prices with increased consumption of oil.

Higher prices also resulted in increased exploration and production outside of OPEC. From 1980 to 1986 non-OPEC production increased 10 million barrels per day. OPEC was faced with lower demand and higher supply from outside the organization.

From 1982 to 1985, OPEC attempted to set production quotas low enough to stabilize prices. These attempts met with repeated failure as various members of OPEC produced beyond their quotas. During most of this period Saudi Arabia acted as the swing producer cutting its production in an attempt to stem the free fall in prices. In August of 1985, the Saudis tired of this role. They linked their oil price to the spot market for crude and by early 1986 increased production from 2 MMBPD to 5 MMBPD. Crude oil prices plummeted below $10 per barrel by mid-1986. Despite the fall in prices Saudi revenue remained about the same with higher volumes compensating for lower prices.
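The volume-for-price trade-off in that last sentence is easy to see with a rough calculation; the production figures below (2 and 5 MMBPD) come from the text, while both prices are placeholders.

```python
# Sketch of the revenue offset described above. The production figures come
# from the text; both prices are hypothetical placeholders.
price_before, volume_before = 28.0, 2.0   # $/bbl (hypothetical), MMBPD
price_after, volume_after = 11.0, 5.0     # $/bbl (hypothetical), MMBPD

print(price_before * volume_before)  # ~$56 million per day before the collapse
print(price_after * volume_after)    # ~$55 million per day after -- roughly unchanged
```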

A December 1986 OPEC price accord set a target of $18 per barrel, but it was already breaking down by January of 1987 and prices remained weak.

The price of crude oil spiked in 1990 with the lower production and uncertainty associated with the Iraqi invasion of Kuwait and the ensuing Gulf War. The world, and particularly the Middle East, took a much harsher view of Saddam Hussein invading Arab Kuwait than of his invasion of Persian Iran. The proximity of the conflict to Saudi Arabia, the world's largest oil producer, helped to shape the reaction.

Following what became known as the Gulf War to liberate Kuwait, crude oil prices entered a period of steady decline; by 1994, inflation-adjusted prices had reached their lowest level since 1973.

The price cycle then turned up. The United States economy was strong and the Asian Pacific region was booming. From 1990 to 1997 world oil consumption increased 6.2 million barrels per day. Asian consumption accounted for all but 300,000 barrels per day of that gain and contributed to a price recovery that extended into 1997. Declining Russian production contributed to the price recovery. Between 1990 and 1996 Russian production declined over 5 million barrels per day.


[Figure: World Events and Crude Oil Prices 1981-1998]

[Figure: U.S. Petroleum Consumption]

[Figure: Non-OPEC Production & Crude Oil Prices]

[Figure: OPEC Production & Crude Oil Prices]

[Figure: Russian Crude Oil Production]

OPEC continued to have mixed success in controlling prices. There were mistakes in timing of quota changes as well as the usual problems in maintaining production discipline among its member countries.

The price increases came to a rapid end in 1997 and 1998 when the impact of the economic crisis in Asia was either ignored or severely underestimated by OPEC. In December, 1997 OPEC increased its quota by 2.5 million barrels per day (10 percent) to 27.5 MMBPD effective January 1, 1998. The rapid growth in Asian economies had come to a halt, and in 1998 Asian Pacific oil consumption declined for the first time since 1982. The combination of lower consumption and higher OPEC production sent prices into a downward spiral. In response, OPEC cut quotas by 1.25 MMBPD in April and another 1.335 MMBPD in July. Prices continued down through December 1998.

Prices began to recover in early 1999 and OPEC reduced production by another 1.719 million barrels per day in April. As usual not all of the quota cuts were observed, but between early 1998 and the middle of 1999 OPEC production dropped by about 3 million barrels per day, which was sufficient to move prices above $25 per barrel.

With minimal Y2K problems and growing US and world economies, the price continued to rise throughout 2000 to a post-1981 high. Between April and October, 2000, three successive OPEC quota increases totaling 3.2 million barrels per day were not able to stem the price increases. Prices finally started down following another quota increase of 500,000 barrels per day effective November 1, 2000.


[Figure: World Events and Crude Oil Prices 1997-2003]

[Figure: OPEC Production 1990-2007]

Russian production increases dominated non-OPEC production growth from 2000 forward and were responsible for most of the non-OPEC increase since the turn of the century.

Once again it appeared that OPEC overshot the mark. In 2001, a weakened US economy and increases in non-OPEC production put downward pressure on prices. In response OPEC once again entered into a series of reductions in member quotas cutting 3.5 million barrels by September 1, 2001. In the absence of the September 11, 2001 terrorist attack this would have been sufficient to moderate or even reverse the trend.

In the wake of the attack crude oil prices plummeted. Spot prices for the U.S. benchmark West Texas Intermediate were down 35 percent by the middle of November. Under normal circumstances a drop in price of this magnitude would have resulted in another round of quota reductions, but given the political climate OPEC delayed additional cuts until January 2002. It then reduced its quota by 1.5 million barrels per day and was joined by several non-OPEC producers, including Russia, who promised combined production cuts of an additional 462,500 barrels per day. This had the desired effect, with oil prices moving into the $25 range by March, 2002. By mid-year the non-OPEC members were restoring their production cuts, but prices continued to rise and U.S. inventories reached a 20-year low later in the year.

By year end oversupply was not a problem. Problems in Venezuela led to a strike at PDVSA causing Venezuelan production to plummet. In the wake of the strike Venezuela was never able to restore capacity to its previous level and is still about 900,000 barrels per day below its peak capacity of 3.5 million barrels per day. OPEC increased quotas by 2.8 million barrels per day in January and February, 2003.

On March 19, 2003, just as some Venezuelan production was beginning to return, military action commenced in Iraq. Meanwhile, inventories remained low in the U.S. and other OECD countries. With an improving economy U.S. demand was increasing and Asian demand for crude oil was growing at a rapid pace.

The loss of production capacity in Iraq and Venezuela combined with increased OPEC production to meet growing international demand led to the erosion of excess oil production capacity. In mid 2002, there was over 6 million barrels per day of excess production capacity and by mid-2003 the excess was below 2 million. During much of 2004 and 2005 the spare capacity to produce oil was under a million barrels per day. A million barrels per day is not enough spare capacity to cover an interruption of supply from most OPEC producers.
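Excess capacity is a simple subtraction, but it is worth making the definition explicit; the sketch below uses hypothetical capacity and production figures chosen only to mirror the mid-2002 versus 2004-2005 contrast described above.

```python
# Sketch: excess (spare) capacity is production capacity minus actual output.
# Capacity and production numbers are hypothetical, not historical data.
def spare_capacity(capacity_mmbpd: float, production_mmbpd: float) -> float:
    return max(capacity_mmbpd - production_mmbpd, 0.0)

print(round(spare_capacity(82.0, 76.0), 1))   # ~6 MMBPD cushion, as in mid-2002
print(round(spare_capacity(85.0, 84.2), 1))   # under 1 MMBPD, as in 2004-2005
```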

In a world that consumes over 80 million barrels per day of petroleum products, such a thin margin of spare capacity added a significant risk premium to crude oil prices and is largely responsible for prices in excess of $40-$50 per barrel.

Other major factors contributing to the current level of prices include a weak dollar and the continued rapid growth in Asian economies and their petroleum consumption. The 2005 hurricanes and U.S. refinery problems associated with the conversion from MTBE as an additive to ethanol have contributed to higher prices.

[Figure: World Events and Crude Oil Prices 2001-2007]

[Figure: Russian Crude Oil Production]

[Figure: Venezuelan Oil Production]

[Figure: Excess Crude Oil Production Capacity]

One of the most important factors supporting a high price is the level of petroleum inventories in the U.S. and other consuming countries. Until spare capacity became an issue, inventory levels provided an excellent tool for short-term price forecasts. Although not well publicized, OPEC has for several years depended on a policy that amounts to world inventory management. Its primary reason for cutting back on production in November, 2006 and again in February, 2007 was concern about growing OECD inventories. Their focus is on total petroleum inventories, including crude oil and petroleum products, which are a better indicator of prices than crude oil inventories alone.
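The inventory signal described above can be illustrated with a toy calculation; the data pairs below are invented, intended only to show the inverse relationship between stocks (relative to normal) and price that makes inventories useful as a short-term indicator.

```python
# Sketch: inventories as a short-term price indicator, using invented data.
# Each pair is (total petroleum stocks relative to their seasonal norm in
# million barrels, crude price in $/bbl). All values are hypothetical.
import statistics

samples = [(-60, 68.0), (-30, 61.0), (0, 55.0), (40, 50.0), (80, 44.0)]
deviations = [d for d, _ in samples]
prices = [p for _, p in samples]

# A strongly negative correlation reflects the pattern described in the text:
# stocks below normal tend to accompany high prices, and vice versa.
print(round(statistics.correlation(deviations, prices), 2))  # requires Python 3.10+
```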
[Figure: World Events and Crude Oil Prices 2004-2007]

Impact of Prices on Industry Segments

Drilling and Exploration

Boom and Bust



The Rotary Rig Count is the average number of drilling rigs actively exploring for oil and gas. Drilling an oil or gas well is a capital investment in the expectation of returns from the production and sale of crude oil or natural gas. Rig count is one of the primary measures of the health of the exploration segment of the oil and gas industry. In a very real sense it is a measure of the oil and gas industry's confidence in its own future.

At the end of the Arab Oil Embargo in 1974 the rig count was below 1,500. It rose steadily under regulated crude oil prices to over 2,000 in 1979. From 1978 to the beginning of 1981, domestic crude oil prices exploded from a combination of the rapid growth in world energy prices and the deregulation of domestic prices. At that time, high prices and forecasts of crude oil prices in excess of $100 per barrel fueled a drilling frenzy. By 1982 the number of rotary rigs running had more than doubled.

It is important to note that the peak in drilling occurred over a year after oil prices had entered a steep decline which continued until the 1986 price collapse. The one year lag between crude prices and rig count disappeared in the 1986 price collapse. For the next few years the economy of the towns and cities in the oil patch was characterized by bankruptcy, bank failures and high unemployment.

[Figure: U.S. Rotary Rig Count 1974-2005, Crude Oil and Natural Gas Drilling]

After the Collapse


Several trends were established in the wake of the collapse in crude prices. The lag of over a year for drilling to respond to crude prices is now reduced to a matter of months. (Note that the graph on the right is limited to rigs involved in exploration for crude oil, as compared to the previous graph, which also included rigs involved in gas exploration.) Like any other industry that goes through hard times, the oil business emerged smarter, leaner and more conservative. Industry participants, bankers and investors were far more aware of the risk of price movements. Companies long familiar with assessing geologic, production and management risk added price risk to their decision criteria.

Technological improvements were incorporated:

Increased use of 3-D seismic data reduced drilling risk.

Directional and horizontal drilling led to improved production in many reservoirs.

Financial instruments were used to limit exposure to price movements (a short sketch follows this list).

Increased use of CO2 floods and other improved recovery methods boosted production in existing wells.
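To make the financial-instruments item above concrete, here is a minimal sketch of a fixed-for-floating crude swap, one common way producers limited price exposure; the prices and volume are hypothetical and not drawn from the article.

```python
# Sketch: settlement of a simple fixed-for-floating crude swap from a
# producer's point of view. Prices and volume are hypothetical.
def swap_settlement(fixed_price: float, market_price: float, volume_bbl: float) -> float:
    """Producer receives fixed and pays floating: the swap gains when the market falls."""
    return (fixed_price - market_price) * volume_bbl

print(swap_settlement(20.0, 14.0, 10_000))  # market fell: swap pays +$60,000
print(swap_settlement(20.0, 25.0, 10_000))  # market rose: swap loses $50,000,
                                            # offset by higher physical sales revenue
```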

In spite of all of these efforts the percentage of rigs employed in drilling for crude oil decreased from over 60 percent of total rigs at the beginning of 1988 to under 15 percent until a recent resurgence.

[Figure: U.S. Rotary Rig Count, Exploration for Oil]

[Figure: U.S. Rotary Rig Count, Percent Exploring for Crude Oil]



Well Completions - A measure of success?

Rig count does not tell the whole story of oil and gas exploration and development. It is certainly a good measure of activity, but it is not a measure of success.

After a well is drilled it is either classified as an oil well, natural gas well or dry hole. The percentage of wells completed as oil or gas wells is frequently used as a measure of success. In fact, this percentage is often referred to as the success rate.

Immediately after World War II 65 percent of the wells drilled were completed as oil or gas wells. This percentage declined to about 57 percent by the end of the 1960s. It rose steadily during the 1970s to reach 70 percent at the end of that decade. This was followed by a plateau or modest decline through most of the 1980s.

Beginning in 1990, shortly after the harsh lessons of the price collapse, completion rates increased dramatically to 77 percent. What was the reason for the dramatic increase? For that matter, what was the cause of the steady drop in the 1950s and 1960s or the reversal in the 1970s?

Since completion rates are much lower for the riskier exploratory wells, a shift in emphasis away from development drilling would result in lower overall completion rates. This, however, was not the case. An examination of completion rates for development and exploratory wells shows the same general pattern. The decline was price related, as we explain below.
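To see why such a mix shift could have mattered in principle, the sketch below computes the overall completion rate as a weighted average of development and exploratory rates; the rates and shares are hypothetical, not historical values.

```python
# Sketch: overall completion rate as a weighted average of development and
# exploratory completion rates. All rates and shares are hypothetical.
def overall_rate(dev_rate: float, exp_rate: float, dev_share: float) -> float:
    return dev_rate * dev_share + exp_rate * (1 - dev_share)

print(round(overall_rate(0.80, 0.30, 0.90), 2))  # 0.75 when drilling is mostly development
print(round(overall_rate(0.80, 0.30, 0.70), 2))  # 0.65 when exploration's share grows
```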

Some would argue that the periods of decline were a result of the fact that every year there is less oil to find. If the industry does not develop better technology and expertise every year, oil and gas completion rates should decline. However, this does not explain the periods of increase.

The increases of the seventies were more related to price than technology. When a well is drilled, the fact that oil or gas is found does not mean that the well will be completed as a producing well. The determining factor is economics. If the well can produce enough oil or gas to cover the additional cost of completion and the ongoing production costs, it will be put into production. Otherwise, it is a dry hole even if crude oil or natural gas is found. The conclusion is that if real prices are increasing we can expect a higher percentage of successful wells. Conversely, if prices are declining, the opposite is true.
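The completion decision described above reduces to a simple economic test; the sketch below uses hypothetical recovery, cost, and price figures to show how the same discovery can be a producer at one price and a dry hole at another.

```python
# Sketch of the completion decision described above, with hypothetical numbers.
def complete_well(expected_recovery_bbl: float, price: float,
                  completion_cost: float, operating_cost_per_bbl: float) -> bool:
    """Complete only if expected net revenue covers the cost of completion."""
    net_revenue = expected_recovery_bbl * (price - operating_cost_per_bbl)
    return net_revenue > completion_cost

print(complete_well(40_000, 30.0, 500_000, 12.0))  # True  -> completed as a producer
print(complete_well(40_000, 14.0, 500_000, 12.0))  # False -> reported as a dry hole
```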

The increases of the 1990s, however, cannot be explained by higher prices. These increases are the result of improved technology and the shift to a higher percentage of natural gas drilling activity. The increased use of and improvements to 3-D seismic data and analysis, combined with horizontal and directional drilling, improved prospects for successful completions. The fact that natural gas is easier to see in seismic data adds to that success rate.

Most dramatic is the improvement in the percentage of exploratory wells completed. During the 1990s, completion rates for exploratory wells soared from 25 to 45 percent.

[Figure: U.S. Oil and Gas Well Completion Rates]

[Figure: U.S. Oil and Gas Well Completion Rates, Development]

[Figure: U.S. Oil and Gas Well Completion Rates, Exploration]


Workover Rigs - Maintenance

Workover rig count is a measure of the industry's investment in the maintenance of oil and gas wells. The Baker-Hughes workover rig count includes rigs involved in pulling production tubing, sucker rods and pumps from a well that is 1,500 feet or more in depth.

Workover rig count is another measure of the health of the oil and gas industry. A disproportionate percentage of workovers are associated with oil wells. Workover rigs are used to pull tubing for repair or replacement of rods, pumps and tubular goods which are subject to wear and corrosion.

A low level of workover activity is particularly worrisome because it is indicative of deferred maintenance. The situation is similar to the aging apartment building that no longer justifies major renovations and is milked as long as it produces a positive cash flow. When operators are in a weak cash position workovers are delayed as long as possible. Workover activity impacts manufacturers of tubing, rods and pumps. Service companies coating pipe and other tubular goods are heavily affected.

[Figure: U.S. Workover Rigs and Crude Oil Prices]




Copyright 1996-2007 by James L. Williams
