Drilling for Earthquakes

By Anna Kuchment on March 28, 2016; Scientific American

Editor’s Note (11/7/2016): A magnitude 5.0 earthquake struck central Oklahoma on Sunday, Nov. 6, damaging buildings across the area. Experts have not yet tied the quake to oil and gas production, but the epicenter is near wastewater disposal wells, structures that have been implicated in earlier quakes in the state. 

To Cathy Wallace, the earthquakes that have been rattling her tidy suburban home in Dallas feel like underground thunderstorms. First comes a distant roar, then a boom and a jolt. Her house shakes, and the windows shudder. Framed prints on the walls clatter and tilt. A heavy glass vase tips over with a crash.

The worst moments are the ones between the rumble and the impact. “Every time it happens you know it’s going to hit, but you don’t know how severe it’s going to be,” she says. “Is this going to be a bigger one? Is this the part where my house falls down? It’s scary. It’s very scary.”

Until 2008 not a single earthquake had ever been recorded by the U.S. Geological Survey from the Dallas–Fort Worth (DFW) area, where Wallace has lived for more than 20 years. Since then, close to 200 have shaken the cities and their immediate suburbs. Statewide, Texas is experiencing a sixfold increase in earthquakes over historical levels. Oklahoma has seen a 160-fold spike in quakes, some of which have sent people to hospitals and damaged buildings and highways. In 2014 the state’s earthquake rate surpassed California’s.

The rise in quakes coincides with an increase in drilling activity. Wallace’s house, for instance, sits above the Barnett Shale Formation, a layer of hard black rock that holds the U.S.’s second-largest deposit of natural gas. Between 1998 and 2002 companies started drilling this deposit using hydraulic fracturing, or fracking, which involves pumping millions of gallons of water, plus sand and chemicals, into the ground at high pressure to crack the rock and release the gas. As the gas comes up the well so does the fracking fluid, along with volumes of brine so salty it is hazardous. The fluids are pumped back down a different hole drilled far below the shale into porous rock for permanent disposal. As more and more fluid is injected into these wastewater wells, pressure can start to build up on deep geologic faults. Eventually one can slip, causing an earthquake.

Researchers at the USGS and other institutions have tied earthquake surges in eight states, including Texas, Oklahoma, Ohio, Kansas and Arkansas, to oil and gas operations. Some state regulators have been slow to accept scientists’ findings. Residents have become increasingly angry, and environmental groups have sued. “This is a public safety issue, and there’s been a lot of denial and ignoring of the problem,” says Wallace, who has joined neighbors to push for the shutdown of nearby wastewater wells.

As scientists continue to study the phenomenon, they have found more reason for concern. Evidence suggests that earthquake risks can spread for miles beyond the original disposal sites and can persist for a decade or more after drilling stops. And although the biggest earthquake tied to wastewater injection was a magnitude 5.6 event near Oklahoma City in 2011, scientists think that temblors as powerful as 7.0—enough to cause fatalities and damage buildings across a wide area—are possible, though unlikely.

THE FIRST SIGNS OF A LINK

Geologists have known since the 1960s that pushing fluids into the ground can set off quakes. In 1961 crews drilled a deep well at a chemical weapons plant outside Denver, known as the Rocky Mountain Arsenal. Within months after workers started pumping hazardous waste into the well, residents felt tremors. More than 700 small to modest-size quakes shook the ground between 1962 and 1966.

A local geologist, David Evans, noticed that the volume and pressure of the injections corresponded with earthquake rates. In a 1966 paper he concluded that the well was likely to blame for the quakes. “It is believed that a stable situation,” he wrote, “is being made unstable by the application of fluid pressure.”

The U.S. Army shut down the disposal well that same year. Yet the earthquakes continued and even grew stronger as pressure from the injections propagated belowground, encountering new faults and disturbing them. Matthew Hornbach, a geophysicist at Southern Methodist University (S.M.U.) in Dallas, compares the phenomenon with spilling a cup of water on a paper towel. “Even if you stop pouring, the water is still there spreading out, and it’s very difficult to stop it,” he says. The largest quakes, including one that reached magnitude 4.8—strong enough to knock objects off shelves but generally not damage buildings—struck in 1967 and then gradually petered out. Residents continued to feel smaller tremors until 1981.

The case intrigued seismologists, and the USGS set up an experiment based on it a few years later. In 1969 Chevron Oil allowed the USGS to use one of its wells to more closely study the effects of fluid pressure on faults. The well was in a seismically active zone of the Rangely oil reservoir in Colorado, and Chevron had been injecting water into the well to stimulate petroleum production. USGS scientists turned the injections on and off and followed the fluid pressure as it migrated through deep rocks. They came up with the exact injection pressure required to trigger quakes. When the pressure exceeded that level, earthquakes rumbled; when the pressure fell below the level, they quieted down.

The experiment showed that human-triggered earthquakes could be controlled by adjusting wastewater-injection pressure. Unfortunately, the lessons of Rangely and the Rocky Mountain Arsenal were apparently forgotten by the early 2000s, when fossil-fuel companies embarked on the shale-gas boom. “Scores of papers on injection-induced earthquakes were published in the geophysical literature in the following 40-plus years, and the problem was well understood and appreciated by seismologists,” says Bill Ellsworth, a Stanford University geophysicist who launched his career at the USGS while the Rangely experiment was under way. He believes professional skepticism slowed the formation of a consensus. “There were a lot of doubts expressed by very good petroleum engineers that [earthquakes caused by injection wells] were even possible,” he says. “Knowledge of the whole physical process was either lost or had not been effectively communicated to a broad community.”

IT STARTED IN TEXAS

Soon after more aggressive drilling began in Texas and Oklahoma shales, reports of quakes started coming in. On October 30, 2008, Dallas–Fort Worth residents called 911 to report loud booming noises accompanied by shaking walls and furniture. Many wondered if something had blown up.

Seismologists Cliff Frohlich of the University of Texas at Austin and Brian Stump of S.M.U. investigated. They put out numerous seismometers and recorded more than 180 small earthquakes between October 30 and May 31, 2009. They then found that a major gas producer had recently drilled a wastewater well at Dallas–Fort Worth International Airport, less than half a mile from the center of the quake cluster. “On the basis of time and spatial correlations, we conclude the DFW sequence may be the result of fluid injection at the SWD [saltwater-disposal] well,” they wrote in a paper published in March 2010 in The Leading Edge.

The paper, Frohlich says, might have been ignored were it not for several other events. In June 2009 another set of quakes had rustled a small industrial city south of Fort Worth. A few months later stronger quakes shook the towns of Guy and Greenbrier in Arkansas. In March 2011 the typically steady ground of Ohio started shifting as 12 quakes rattled the Youngstown area.

From their offices in Menlo Park, Calif., USGS scientists noticed that something unusual was happening. “On a daily basis, we could see there were earthquakes occurring in places where they didn’t belong,” says Ellsworth, who worked at the USGS until 2015. He began to look broadly at earthquake rates in the U.S. and discovered an unsettling pattern: Between 1967 and 2000 the average rate of earthquakes east of the Rocky Mountains was 21 per year. Between 2010 and 2012 the rate had jumped to 100. He plotted the data and presented them at a scientific conference. “It generated a lot of interest,” he says, from the general public as well as from scientists in industry and academia.

DRILLING SIDEWAYS

Behind the rise of earthquakes is the rise of wastewater injection wells. And behind those wells is a technological breakthrough known as horizontal drilling. The technique allows operators to drill wells vertically and then bend them 90 degrees like flexible straws. Instead of drilling right through a gas deposit that is 300 feet thick but miles across, these wells can turn when they are inside the deposit and run for thousands of feet, collecting significantly more gas and oil.

With that gas and oil, however, come vast quantities of very salty water. “The oil and gas business is really a water-handling business,” says Scott Tinker, Texas’s state geologist and director of the University of Texas at Austin’s Bureau of Economic Geology. The water comes from the same rocks as the oil and gas. All three are remnants of ancient seas that heat, pressure and time transformed. “The pore spaces, or tiny holes, in the rock remain filled with these ancient oceans, so when we drill wells today that water is produced to the surface,” Tinker says. Although the water is natural, it can be several orders of magnitude more saline than seawater and is often laced with naturally occurring radioactive material. It is toxic to plants and animals, so operators bury it deep underground to protect drinking-water supplies closer to the surface.

A legendary Texas natural gas baron named George Mitchell, who died in 2013, was the first to tap the Barnett shale using hydraulic fracturing. Oklahoma’s Devon Energy combined horizontal drilling with hydraulic fracturing to extract even more gas. The technique soon caught on across Texas, Oklahoma and other petroleum-producing states.

As drilling proliferated, U.S. shale-gas production rose steeply, from 1.3 trillion cubic feet in 2007 to 5.3 trillion in 2010 and 13.4 trillion in 2014. The volume of wastewater that was brought to the surface and had to be disposed of soared, too.

In Texas the amount of water pumped into wastewater wells grew from 33.8 million barrels per month in 2007 to 81.1 million in 2014. In Oklahoma it nearly doubled from 849 million barrels per year in 2009 to 1.54 billion in 2014. Soon regular injection wells were not large enough, and operators turned to so-called high-volume injection wells with names like “Deep Throat.” Many absorbed more than 300,000 barrels of water per month.

OKLAHOMA LIGHTS UP

As the quakes mounted, scientists moved from loosely associating them with wastewater injection to deducing a more direct link. In 2011 University of Oklahoma seismologist Katie Keranen returned from a field study in Alaska with half a dozen seismometers in tow. As they stood in storage in her basement lab, a magnitude 4.8 earthquake jolted the town of Prague, about 60 miles east of Oklahoma City. No sooner had she and her students put out the instruments than a 5.6 quake rocked the same town. To date, that November earthquake is the largest event related to wastewater injection, according to the USGS. It injured two people, destroyed 14 homes, buckled parts of a highway and was felt in at least 17 states.

[Photo captions] In 2011 a magnitude 5.6 quake shook homes in Prague, Okla., knocking down Sandra and Gary Landra’s chimney, which struck Sandra (1), and cracking their basement floor (2). More quakes in the state since then have prompted residents to protest against disposal wells, which have been linked to the Prague temblor and others (3). To gather more data, scientists are installing additional seismographs, in some cases powered by solar panels (4).

Keranen and her students recorded that temblor, and hundreds of aftershocks, and used the data to publish papers in Geology and Science. For the Geology paper, published in March 2013, Keranen and her colleagues created a geophysical model to estimate how quickly fluid pressure could build up underground and how far it could spread. It showed that the pressure was likely strong enough to have caused the first earthquake, which then set off a domino effect; stress changes from the first rupture caused nearby faults to slip. The Science paper, released in July 2014, tied four high-volume injection wells to a cluster of earthquakes in Jones, just west of Prague.

Keranen compares the movement of fluid and pressure through the earth’s subsurface to water filling a vase that someone broke and glued back together. “If you have high enough pressure, the fluid could just force its way down the fractured pathways,” she says. The pressure counteracts the friction that holds faults together and allows them to slip apart—a phenomenon known as an induced earthquake.

Unconvinced, the Oklahoma Geological Survey (OGS) issued a statement disputing Keranen’s findings in the Geology paper. “Our point was just that it looked like a natural earthquake, and there was no reason to call it induced,” says Randy Keller, who was director of the survey until he retired at the end of 2014. The statement, signed by Keller and Oklahoma’s state seismologist at the time, Austin Holland, pointed to evidence of historical natural earthquakes in the area.

Keranen was surprised by the response but now thinks her reaction was naive. “I have more appreciation for the fact that they wouldn’t necessarily believe one report,” she says. “They wanted to see the bulk of the scientists and multiple studies point in that direction.” Still, she felt frustrated that Oklahoma did not quickly slow down or stop injections in some wells, a step the state did not take on a wide scale until early 2015. She says she also received pushback from administrators at her university who were not convinced that a link between disposal wells and quakes could be demonstrated. In mid-2013 Keranen left the University of Oklahoma for Cornell University.

RUMBLES NEAR FORT WORTH

A few months after Keranen’s paper came out, Texas started shaking again. This time quakes struck two rural towns northwest of Fort Worth—Azle and Reno, in one of the densest areas of oil and gas development.

By this time S.M.U. had hired several new geophysicists, who joined Frohlich and Stump’s investigation. Heather DeShon, a seismologist, deployed seismic stations and began mapping faults underneath the towns. Hornbach, along with Stanford’s Ellsworth, began studying lake, river and aquifer levels to see if North Texas’s drought could have altered stresses on faults. The team also collected data on nearby saltwater-disposal wells and built a 3-D model to simulate pressure from injection wells and estimate how it would move through underground rock. Their conclusion: wastewater injection from two nearby wells was the most likely cause of the earthquakes.

Even before the study was published in April 2015, state regulators began questioning its findings. After I sent an embargoed version of the S.M.U. paper to Craig Pearson, a seismologist who works for the Railroad Commission of Texas (RRC)—the state agency that regulates oil and gas—he responded with a statement saying the research raised “many questions with regard to its methodology, the information used and conclusions it reaches.” But he declined to elaborate before meeting with the paper’s authors.

The RRC (its name is a historical artifact) is overseen by three commissioners. One received campaign contributions from an oil-company political-action committee, and the two others received contributions from the CEO of EnerVest, one of the two operators implicated in the S.M.U. study. Nevertheless, “regulatory decisions are made based on science, data and best practices to ensure protection of public safety and our natural resources,” wrote Gaye McElwain, an RRC spokesperson, in a statement to Scientific American.

The RRC did eventually summon both well operators to full-day hearings in Austin to demonstrate why their wells should not be shut down. “As a result of those highly technical hearings, based on scientific data and evidence presented, it was determined the operators were not contributing to seismic activity,” McElwain wrote. By September 2015, when the commission issued its ruling, the volumes of wastewater being injected in the vicinity of the earthquakes had been reduced, the earthquakes had died down and the well operators were officially allowed to continue business as usual.

OHIO SAYS STOP

Other states have reacted differently. After a series of tremors disturbed the residents of Youngstown, Ohio, in 2011, the state shut down nearby injection wells and installed additional seismic stations to detect earthquakes too tiny to be felt. It established new rules dictating that a quake as small as magnitude 2.0, about 10 times too weak to create noticeable shaking, would trigger well shutdowns and investigations. Ohio’s earthquakes peaked at 11 in 2011 before decreasing to four in 2015, according to USGS data.

Kansas also responded relatively quickly. Rex Buchanan, interim director of the Kansas Geological Survey, was watching a Kansas City Royals game in September 2014 when his cell phone started buzzing with alerts from the USGS. Tremors were shaking south-central Kansas near the state’s border with Oklahoma. This was not a surprise, because more than 100 earthquakes had visited Kansas during that year, up from an average of one every two years. But the tremors were growing stronger and soon reached magnitude 4.2. Kansas governor Sam Brownback convened an induced-seismicity task force to evaluate the quakes. The task force, chaired by Buchanan, recommended restricting injection volumes within five seismic zones across two counties.

How were Kansas officials able to reach a consensus? “I don’t think we could come up with any other explanation,” Buchanan says. “You see a level of activity like we saw: a dramatic, dramatic increase, and in almost exactly the place where the really large-volume wells are going in—and where you see the same correlation in Oklahoma. It’s pretty hard to come to any other conclusion.” He adds that he and his colleagues had the benefit of watching science and regulations develop in Ohio, Texas and Oklahoma. So far the measures Kansas took seem to have had an impact. “Certainly our activity has been down lately,” he says, in terms of both earthquake rates and size. “But I have pressed people real hard not to take the approach that this is some sort of problem solved, because it’s not.”

The reduced activity is at least partly related to the currently low price of oil, which has prompted some operators to drill less and therefore to produce less wastewater. But prices will eventually rise again, Buchanan says, and he wants to be ready “so we don’t have to go through this again.”

HOW BIG CAN QUAKES GET?

Engineers who set building codes and officers at insurance companies need to know where the next induced earthquakes will strike and how big they will be. To find out, geologists at the USGS’s Earthquake Hazards Program are analyzing the rates of the induced quakes that have been multiplying across the U.S. and how induced earthquakes shake the ground differently from natural ones.

USGS scientists have found that ground movements from induced quakes are stronger just above the epicenter but less so away from the immediate area, possibly because they tend to be shallower than natural ones. Because the uppermost layers of the earth’s crust east of the Rocky Mountains are denser than those in California, however, they transmit energy efficiently, and induced quakes can still be felt at great distances.

Next, the group had to come up with a maximum magnitude for these temblors: How strong could they get? After comparing central U.S. earthquakes with tremors in geologically similar parts of the world—and noting that induced quakes, so far, tended to rupture either smaller faults or smaller sections of faults than West Coast quakes—they settled on an upper limit of magnitude 6, which can damage even well-built structures. “But we can’t rule out quakes of magnitude 7 and above,” says Mark Petersen, chief of the National Seismic Hazard Mapping Project. Because scientists have evidence in the prehistoric record of quakes that large in the Texas-Oklahoma region, the USGS’s new maps include a low-probability chance for that possibility.

Finally, the geologists had to work out the time period over which to make a reasonable earthquake forecast. They settled on a one-year forecast based on the previous year’s earthquake rate and put that information in a series of maps. “It’s kind of like the weather,” Petersen says. “If it rained today, it’s more likely that it will rain tomorrow.”
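
As a toy illustration only (this is not the USGS’s actual model), the arithmetic of such a rate-based forecast can be sketched in a few lines of Python. It assumes that last year’s rate simply continues, uses a standard Gutenberg-Richter b-value of 1.0 to scale from magnitude 3 quakes to stronger ones, and treats events as a Poisson process:

    import math

    m3_last_year = 890       # Oklahoma quakes of magnitude 3+ in 2015 (cited below)
    b = 1.0                  # assumed Gutenberg-Richter b-value
    target_mag = 5.5         # "damaging" threshold used in the 2013-14 advisories

    # Gutenberg-Richter: each added magnitude unit cuts the rate about tenfold
    rate = m3_last_year * 10 ** (-b * (target_mag - 3.0))

    # Poisson model: probability of at least one such quake in the coming year
    p = 1.0 - math.exp(-rate)
    print(f"expected M{target_mag}+ per year: {rate:.2f}; P(>=1 in a year): {p:.0%}")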

The USGS issued the maps on March 28 of this year. The computer models used to generate the maps also estimate where, how often and how strongly ground shaking from an earthquake could occur, so that residents, engineers and city planners can see the likelihood that their community will experience a damaging earthquake over the next year.

CURBING THE THREAT

To many Oklahomans, it is clear that the risk has risen sharply. Data back up their experiences. The earthquake rate in the state has grown at an astounding pace. In 2013 the state recorded 109 quakes of magnitude 3 and greater. The following year the number jumped to 585, and in 2015 it reached 890.

The escalation prompted two unusual warnings jointly issued by the USGS and the OGS in October 2013 and May 2014. Seismologists stated that Oklahoma had a significantly increased chance of seeing a damaging magnitude 5.5 temblor. “It was the first time I think we’d ever issued an earthquake advisory east of the Rockies,” says Robert Williams, the USGS central and eastern U.S. coordinator for earthquake hazards.

Scientists such as Keranen and Mark Zoback, a geophysicist at Stanford, are producing even more detailed analyses of why quakes happen so frequently in some places but less often—or not at all—in others. For example, North Dakota, the second-largest crude oil–producing state after Texas, has logged only one earthquake in the past five years. One possibility is that fluid pressure there has not yet built up strongly enough to cause quakes. Another is that only a fraction of faults may have the necessary orientation, relative to natural stresses in the earth’s crust, that is conducive to a slip.

As the science has advanced, so have regulations. The OGS formally declared in April 2015 that disposal wells were triggering its quakes. “The OGS considers it very likely that the majority of recent earthquakes, particularly those in central and north-central Oklahoma, are triggered by the injection of produced water in disposal wells,” it wrote in a statement.

Since then, the state has asked that more than 600 disposal wells operating in quake-prone areas cut injection volume by 40 percent below 2014 levels. Although it is too early to know if the actions will have a lasting effect, Jeremy Boak, director of the OGS, says he is starting to see declines in earthquake rates in the areas where injections have been reduced. Overall, the state saw an uptick in stronger quakes at the beginning of 2016, however. Stanford’s Ellsworth does not offer policy prescriptions but wonders if the volume reductions in Oklahoma will be sufficient. “If you pump less, you’re still pumping,” he says, “and it doesn’t guarantee you won’t encounter a fault and cause an earthquake.”

Many wonder why Oklahoma waited until 2015, after the state had experienced more than 750 earthquakes over seven years, to take significant action.

Matt Skinner, a spokesperson for the Oklahoma Corporation Commission, which regulates oil and gas in the state, says the agency had been shutting down individual wells and taking other steps to manage earthquake risk since 2013. The agency did not take wider action until last year, he says, because by that time researchers had published more scientific studies showing how far fluid pressure could travel from a wastewater well. “The issue changed from which well do we take action on to what group of wells do we need to take action on to reduce potential risk?” Skinner says.

Keller, the retired OGS director, says he was also aware of the state’s economic dependence on oil and gas. “We were absolutely slower than those that were quick to pull the trigger,” he says. “We were sitting there trying to balance the economic impact and trying to not push the panic button and at the same time trying to be responsible. It was not an easy task to figure out what to do.” For well operators, a volume reduction means a loss of income and the possibility of having to truck wastewater over long distances to other facilities.

Texas has introduced new measures to monitor earthquakes. Last year the state allocated nearly $4.5 million for the installation of a seismic network and additional earthquake research. Over the past two years the Railroad Commission has also given itself new powers to shut down wells and ask operators to perform tests in areas of new seismic activity. Although the agency has expressed concern about the quakes, it has not yet formally concluded that any have been triggered by energy production.

Other mitigation strategies that states and oil and gas companies are exploring include recycling the wastewater or injecting it into layers of rock that are farther removed or isolated from deep faults. They may also space injection wells farther apart.

To the oil and gas industry, a moratorium on injections—even in a single broad area—is unacceptable. “A ban on injection is a ban on oil and gas production,” says Steve Everley, a spokesperson for Energy InDepth, which is part of the Independent Petroleum Association of America. That is because there are not yet any cost-effective alternatives to injection, and the alternatives that do exist come with their own environmental costs, such as trucking water over longer distances, he says.

Even if Oklahoma shut down all its wells today, many experts say the quakes would continue. “We’re trying to calculate how much energy is in the system right now and how long it may continue on—and at the current earthquake rate the numbers are very big,” says Daniel McNamara, a seismologist at the USGS Geologic Hazards Science Center in Golden, Colo.

Pressed for details, he paused. Then he added: “It’s hundreds of years.”

This article was originally published with the title “Drilling for Earthquakes” in Scientific American, 315, 1, 46-53 (July 2016)

doi:10.1038/scientificamerican0716-46

Factors and Cluster Analysis

Introduction

Factor analysis is a technique that reduces a larger number of correlated variables to a smaller number of latent dimensions. The purpose of studying reflective and formative factors in desertification research is to help students improve their skills in analyzing results. The main difference between reflective and formative factors lies in the direction of causality between the latent variable and its indicators. In a reflective model, the latent construct is the common cause of all the indicators, so any change in the latent variable affects all of the items (Romesburg, 2004). In a formative model, the relationship between the construct and its items runs the other way: the model represents a set of distinct causes, each of which accounts for a portion of the whole construct (Romesburg, 2004). Another difference between the two can be seen in how the models behave when their indicators are changed. In a reflective model, eliminating one or more indicators does not affect the latent variable (Romesburg, 2004). In a formative model, by contrast, changing or removing an indicator diminishes the validity of the scale.
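
To make the contrast concrete, the following minimal sketch (in Python; an illustration assumed here, not drawn from the paper) simulates both kinds of model. In the reflective case one latent variable drives every indicator, so the indicators intercorrelate and dropping one leaves the construct intact; in the formative case the construct is built from distinct causes, so dropping a cause changes the construct itself.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    latent = rng.normal(size=n)                # the unobserved construct

    # Reflective model: each indicator = loading * latent + unique noise,
    # so the latent variable is the common cause of all indicators.
    loadings = np.array([0.9, 0.8, 0.7, 0.6])
    indicators = latent[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

    # Formative model (contrast): the construct is a weighted sum of
    # distinct causes, so removing a cause alters the construct itself.
    causes = rng.normal(size=(n, 4))
    weights = np.array([0.4, 0.3, 0.2, 0.1])
    construct = causes @ weights

    # Reflective indicators intercorrelate strongly; formative causes do not.
    print(np.corrcoef(indicators, rowvar=False).round(2))
    print(np.corrcoef(causes, rowvar=False).round(2))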

Similarities between Exploratory Factor Analysis and Confirmatory Factor Analysis

Both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are grounded in the common factor model. The common factor model holds that each observed response is influenced partly by underlying common factors and partly by unique factors. In the common factor model, the strength of the link between each factor and each measure varies, such that a given factor influences some measures much more strongly than others (Dwyer, Gill & Seetaram, 2012). Both analyses are carried out by examining the patterns of correlation among the observed measures. Highly correlated measures, whether the correlation is positive or negative, are likely to be influenced by the same factors, while uncorrelated measures are likely to be influenced by different factors.
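
A brief exploratory sketch (in Python with scikit-learn; the tooling is an assumption here, since the paper itself discusses SPSS) shows this correlation logic at work: measures generated from the same underlying factor end up loading on the same extracted factor.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n = 400
    f1, f2 = rng.normal(size=(2, n))           # two underlying common factors

    # Measures 0-2 are driven by f1, measures 3-5 by f2, plus unique noise
    X = np.column_stack([0.9 * f1, 0.8 * f1, 0.7 * f1,
                         0.9 * f2, 0.8 * f2, 0.7 * f2])
    X += rng.normal(scale=0.4, size=(n, 6))

    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
    print(fa.components_.round(2))             # two clear blocks of loadings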

Differences between Exploratory and Confirmatory Factor Analysis

The differences between the two types of factor analysis can be explained by the roles they play. Exploratory factor analysis tries to uncover the nature of the constructs that influence a set of responses, whereas confirmatory factor analysis is used to test whether a specified set of constructs influences the responses in a predicted way. The difference is also seen in their objectives. The goals of exploratory factor analysis are to determine the number of common factors influencing the measures and to gauge the strength of the relationships between the factors and the observed measures (Romesburg, 2004). Exploratory factor analysis is performed in seven steps, while confirmatory factor analysis is performed in six.
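
Confirmatory factor analysis, by contrast, starts from a hypothesized structure and tests how well it fits. A hedged sketch follows, assuming the third-party Python package semopy; the model description, variable names and run_cfa helper are illustrative, not from the paper.

    import pandas as pd
    import semopy

    # Hypothesis to test: items x1-x3 measure factor F1, items x4-x6 measure F2
    DESCRIPTION = """
    F1 =~ x1 + x2 + x3
    F2 =~ x4 + x5 + x6
    """

    def run_cfa(data: pd.DataFrame) -> pd.DataFrame:
        """Fit the hypothesized model and return fit indices (e.g., CFI, RMSEA)."""
        model = semopy.Model(DESCRIPTION)
        model.fit(data)                    # data columns must include x1..x6
        return semopy.calc_stats(model)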

Similarities between Exploratory and Confirmatory Cluster Analysis

Both exploratory and confirmatory cluster analysis use variables measured at different levels. Because variables on incompatible scales are not directly comparable, they must be transformed before cluster analysis is performed (Wang, 2009).

Differences between Exploratory and Confirmatory Cluster Analysis

In exploratory cluster analysis the number of clusters is not known, whereas in confirmatory cluster analysis it is. The number of clusters therefore has to be identified in exploratory cluster analysis but not in confirmatory cluster analysis. In exploratory cluster analysis the characteristics of the clusters, such as the cluster centers, are unknown; in confirmatory cluster analysis they are partially known (Wang, 2009). As a result, exploratory cluster solutions are not easy to interpret precisely and require interpretation, whereas confirmatory cluster solutions are more straightforward and do not. The fit to the data is always maximized in exploratory cluster analysis but may be poor in confirmatory cluster analysis.
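
The practical consequence of not knowing the number of clusters is that it must be estimated from the data. A minimal exploratory sketch (in Python with scikit-learn, an assumed toolchain) scans candidate cluster counts and scores each solution with the silhouette coefficient:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    # Three synthetic groups; pretend the true number is unknown in advance
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 3, 6)])
    X = StandardScaler().fit_transform(X)      # put variables on a common scale

    scores = {}
    for k in range(2, 7):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)

    best_k = max(scores, key=scores.get)       # should recover k = 3
    print(scores, "-> chosen k:", best_k)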

Uses and Misuses of Exploratory Factor Analysis and Confirmatory Factor Analysis

Uses

The primary use of factor analysis is to find out whether a small number of underlying factors can explain the results obtained from multiple tests. Exploratory factor analysis is used in various ways: to identify the features most important for classifying grouped data, to demonstrate the dimensionality of a measurement scale, to determine which clusters of items hang together in a questionnaire, and to explore the nature of the constructs underlying responses in a particular area (Wang, 2009).

Confirmatory factor analysis, on the other hand, is used to validate models, to test relationships between different models, and to test the correlations between sets of data. It also makes it possible to compare how well different models account for the same set of data. Cluster analysis is used to group cases rather than variables; it can therefore be used for population segmentation, with each cluster serving as a target group. In confirmatory factor analysis, the fit to the data may turn out to be poor (Wang, 2009).

Confirmatory factor analysis methods are not well supported in standard software such as SPSS, which offers only rudimentary confirmatory options. In SPSS, all initial values must be freed for estimation, and fixing some parameters is not possible, which limits the usefulness of confirmatory factor analysis there.

Misuses

Researchers have found that the structural effects of formative and confirmatory factor models can sometimes create problems. These problems arise because the items contained in the models vary with the outcome variable, and the reverse is also true. Researchers therefore need to be careful about the type of model they choose for their studies.

Relative Importance of Exploratory Factor Analysis and Confirmatory Factor Analysis

Because there is considerable controversy surrounding the choice of analysis for a specific study, it is advisable to weigh the advantages and disadvantages of each before deciding which to use. Selecting a model without first analyzing which one suits the study best can compromise the results, so it is important to choose the model that fits the study, since each model has its own degree of importance. If one wants to use exploratory factor analysis but lacks a basic theory of the underlying constructs, this model provides that opportunity; the problem can also be addressed with confirmatory factor analysis models.

If a confirmatory factor analysis has been performed and yields no significant fit, it is normally acceptable to follow up and correct the inconsistency using exploratory factor analysis, which offers the opportunity to test the intended modifications in a new model on new data. When working with different sets of data, it is reasonable to use exploratory factor analysis to generate theories about the constructs underlying the measures and then follow it with a confirmatory factor analysis. In that case one is merely fitting the data rather than conducting an actual theoretical test of the constructs (Romesburg, 2004). This caveat applies when the results of an exploratory factor analysis are fed directly into a confirmatory factor analysis on the same data. The accepted procedure is to perform an exploratory factor analysis on half of the data and then test the generality of the extracted factors with a confirmatory factor analysis on the other half.

Both exploratory and confirmatory cluster analysis are useful because they group the clustered cases rather than the variables (Dwyer, Gill & Seetaram, 2012), which enables segmentation of the data being analyzed. The interventions suggested by both forms of cluster analysis tend to be clear, because each sub-cluster is interpreted on its own rather than as part of the whole cluster. This makes intervention simple, since adjustments can be made to individual clusters.

What Factor Analysis vs. Cluster Analysis Does and Cannot Do

Factor analysis can be used to interpret a set of items in a questionnaire. Where a researcher has different categories of models contained in the questionnaire, factor analysis can be used to test the different hypotheses contained in those models (Dwyer, Gill & Seetaram, 2012). This is possible because confirmatory factor analysis provides an alternative hypothesis that ensures the items match the models and confirms that these categories of models match the variance used in the research. On the other hand, full factor analysis cannot be carried out in SPSS, and factor analyses of numerous small studies cannot be combined to represent the result of a single big one.

Exploratory factor analysis can be used to derive a theory of the constructs underlying particular measures, and a confirmatory factor analysis can follow it; for this to be valid, the tests must be done on different sets of data. If a confirmatory factor analysis fails to produce a satisfactory fit, it is advisable to investigate the inconsistency in the data using exploratory factor analysis. In general, factor analysis is used to explore sets of data in order to generate hypotheses, and to reduce many variables to a smaller, more manageable set (Dwyer, Gill & Seetaram, 2012).

Cluster analysis is usually performed after factor analysis and is used to identify groupings. It depends on discriminant analysis to confirm whether the groupings differ statistically between the models and to identify which variables differ significantly between the groups. Cluster analysis cannot be used to test the goodness of fit of models, nor can it be used to establish the importance of a model.
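
As a final illustration of the division of labor just described, the following sketch (in Python with scikit-learn; an assumed setup for illustration only) runs factor analysis first to reduce the variables, then clusters the cases on the resulting factor scores:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 12))             # stand-in for questionnaire items
    X = StandardScaler().fit_transform(X)

    # Step 1: factor analysis reduces 12 correlated variables to 3 dimensions
    factor_scores = FactorAnalysis(n_components=3).fit_transform(X)

    # Step 2: cluster analysis groups the cases (not the variables)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factor_scores)
    print(np.bincount(labels))                 # sizes of the case groupings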

References

Dwyer, L., Gill, A., & Seetaram, N. (2012). Handbook of research methods in tourism: Quantitative and qualitative approaches. Cheltenham: Edward Elgar.

Romesburg, H. C. (2004). Cluster analysis for researchers. North Carolina: Lulu Press.

Wang, Y. (2009). Statistical techniques for network security: Modern statistically-based intrusion detection and protection. Hershey: Information Science Reference.

Drilling and Cutting

History:

Drilling refers to the technique of making holes in materials such as wood and stone, or into the ground. Drilling and cutting date back to ancient times, as early as 25,000 BC, when simple rotation methods were used. Drilling methods were developed among the ancient peoples of Egypt, Asia and America (Moloney, 1995, p. 146; McCveire, 1891). Around 1000 BC, a drilling rig made of bamboo with a percussive cable was devised in ancient China (Moor, 1977). The drilling rig was later improved by Leonardo da Vinci to become more efficient. These drilling devices were hand powered and used water for lubrication and cooling; the water also flushed the broken cuttings out of the drilled hole.

During the ancient period, drilling even a small hole required great effort and long hours. The primitive drilling methods used were the awl and the hand drill. The awl was a sharp piece of stone or copper attached to a piece of wood; while drilling, it was pressed against the object to be drilled and rotated by hand. Because drilling into a hard surface such as stone was highly labor intensive, efforts to invent more efficient drilling tools intensified. These efforts toward mechanization led to the development of strap drills, bow drills and pump drills. Over the years, drilling and cutting have evolved greatly, from simple drill bits made of stone, bronze and iron in 3000 BC, to the invention of the triple bow drill in 1450 BC. Today, electric-powered drilling and cutting machines that are faster and more efficient are in use.

Advantages and disadvantages of drilling and cutting:

Advantages of drilling and cutting include providing access to water, gas and oil deep in the earth. Drilling also simplifies building and construction, and it is a fast way to penetrate hard rock.

Disadvantages of drilling and cutting include the risk of injury to personnel and the high cost of implementation. The drilling process also requires skilled personnel to operate.

Methods of drilling and cutting:

Percussion drilling involves repeatedly lifting and dropping a heavy cutting tool, which breaks out material to form a hole. The cutting tool is typically attached to a cable to facilitate its use.

Hand auger drilling, which is most suitable for unconsolidated deposits, works by rotating the auger head into the ground to bore the hole. The auger head is then periodically withdrawn to remove the excavated material.

Sludging, also known as reverse jetting, is a drilling technique that uses a hollow drill pipe made of bamboo or steel. The drill pipe has teeth at the bottom that cut the hole as the pipe moves up and down. Water is pumped down the borehole and out through the drill pipe to remove the debris.

Rotary-percussion drilling is the main technique used in regions with very hard rock. The rock is first crushed and softened with a pneumatic hammer, and the drill is then driven into the rock at high speed using compressed air.

Examples:

Drilling cannot occur without cutting, and drilling machines use cutters to produce specialized holes. Examples of cutters include countersinks and counterbores. Countersinks are cutters ground to specific angles; they are used to cut angled recesses for flat-head screws. Counterbores are special cutters that use a pilot to guide the cutting process.

References

McCveire, J. E. (1891). A study of the primitive methods of drilling. Report of National Museum, Washington, 91, 623-756.

Moloney, N. (1995). Archaeology. Oxford: Oxford University Press, p. 146.

Moor, W. D. (1977). Ingenuity sparks drilling history. Oil and Gas Journal, 35, 159-164, 175-177.