Pollution Control Handbook for Oil and Gas Engineering



It can be difficult for an applicant to judge what an agency considers to be cost effective. Precedent, both local and national, can be used as a guide. In the top-down approach, the control technology with the highest degree of pollutant reduction must be evaluated first. If that technology meets all of the criteria for feasibility, cost effectiveness, air quality impacts, and other impacts, it is determined to be BACT for the proposed new source and the analysis ends.

If not, the next most effective control technology is evaluated for BACT. Established NSPS have already been based on a thorough technology and economic evaluation. The potential technologies include those that are used outside of the U.S., so identifying them takes some research. A useful starting point is the EPA's RACT/BACT/LAER Clearinghouse (RBLC); agencies use this database to post permit application emission limits, technologies, and restrictions for other agencies and applicants to use for just this purpose.
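The top-down selection logic can be sketched in a few lines of Python. This is an illustrative sketch only; the data structure and the acceptance test (standing in for the feasibility, cost-effectiveness, and impact evaluations) are hypothetical placeholders, not anything defined by the regulations.

```python
def select_bact(technologies, is_acceptable):
    """Evaluate candidates from most to least effective; the first survivor
    of all evaluation criteria is proposed as BACT.

    technologies: list of dicts, each with a 'control_efficiency' key (0-1).
    is_acceptable: callable applying the feasibility, cost-effectiveness,
    air-quality, and other-impact tests to one candidate.
    """
    ranked = sorted(technologies,
                    key=lambda t: t["control_efficiency"], reverse=True)
    for tech in ranked:
        if is_acceptable(tech):
            return tech  # most effective surviving candidate
    return None  # no candidate survived the evaluation
```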

Technical books and journals are another common source of technology information. Internet searches now can be quite valuable. Environmental consultants often are hired to prepare permit applications because of their prior knowledge of control technologies and permit application requirements. Less common, but acceptable, sources of information include technical conference reports, seminars, and newsletters. Agencies typically expect a reasonably complete list, but it does not have to be exhaustive. Agency personnel use their own knowledge of the industry to check for completeness, and will request additional information for a potential technology if they believe a good possibility has been left out.

To be technically feasible, the control technology must be available and applicable to the source. Technical infeasibility may be based on physical, chemical, or engineering principles. Technical judgment must be used by the applicant and the reviewing agency in determining whether a control technology is applicable to the specific source. For example, when modifying a process heater, it may not be possible to retrofit low-NOx burners into the heater because of its configuration.

The longer flame length produced by low-NOx burners may impinge on the back wall of the heater. Demonstrating unresolvable technical difficulty due to the size of the unit, location of the proposed site, or operating problems related to specific circumstances can show technical infeasibility. However, when the resolution of technical difficulties is a matter of cost, the technology should be considered technically feasible, and the economic evaluation should determine whether the cost is reasonable.

There may be instances where a huge cost can solve many problems. While seemingly straightforward, two key issues must be addressed in this step. The first is ensuring that a comparable basis is used for emission reduction capability; calculating potential emissions on the basis of mass per unit of production provides such a basis. A more difficult problem arises for control techniques that can operate over a wide range of emission performance levels, depending on factors such as unit size. Not every possible level of efficiency has to be analyzed. Again, judgment is required. Recent regulatory decisions and performance data can help identify practical levels of control.

This reduces the need to analyze extremely high levels in the range of control. By spending a great deal of money on an oversized electrostatic precipitator, for example, an extremely high particulate removal efficiency is theoretically possible. Demonstrated performance, however, is critical to the evaluation of available technology.

Also, it generally is presumed that demonstrated levels of control can be achieved unless there are source-specific factors that limit the effectiveness of the technology. This eliminates the need to analyze lower levels of a range of control. Both adverse and beneficial effects should be discussed, and supporting information should be presented. Thermal oxidation of volatile organics in a concentrated gas stream may be an example of energy production if that energy can be recovered and put to use.

Energy and energy conservation are considered by the government to be important to the economy of the nation, and the energy analysis assures that energy has been considered. However, quantitative thresholds for energy impacts have not been established, so it is difficult to reject a technology based on energy alone. There may be a circumstance where the required energy is not available at the proposed source.

The cost of energy will be considered separately in the economic evaluation. The energy evaluation should consider only direct energy consumption by the control technology. Indirect energy consumption that should not be considered includes the energy required to produce the raw materials for the control technology. A detailed discussion of pollutant concentration in ambient air must be provided separately in the air permit application. The discussion of environmental impacts covers topics such as the creation of solid or hazardous waste.

It also contains an evaluation of impacts on visibility. Both positive and negative impacts should be discussed. That a control technology generates liquid or solid waste or has other environmental impacts does not necessarily eliminate the technology as BACT. In the economic evaluation, the focus is the cost of control, not the economic situation of the facility. The primary factors required to calculate cost effectiveness, defined as dollars per ton of pollutant reduced, are the emission rate, the control effectiveness, the annualized capital cost of the control equipment, and the operating cost of the control equipment.

With these factors, the cost effectiveness is simply the annualized capital and operating costs divided by the amount of pollution controlled by the technology in 1 year. Estimating and annualizing capital costs are discussed in Chapter 7.
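A rough sketch of that arithmetic is shown below. The capital recovery factor is the standard engineering-economics annualization formula; all input numbers are invented for illustration.

```python
def capital_recovery_factor(i, n):
    """Factor that annualizes a capital cost at interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

# Illustrative inputs only; none of these values come from the text.
capital_cost = 2_000_000.0   # total installed capital cost, $
annual_om = 150_000.0        # annual operating and maintenance cost, $/yr
tons_removed = 450.0         # pollutant removed by the control, tons/yr

annualized_capital = capital_cost * capital_recovery_factor(0.07, 10)
cost_effectiveness = (annualized_capital + annual_om) / tons_removed  # $/ton
print(f"cost effectiveness: {cost_effectiveness:,.0f} $/ton")
```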


Total capital cost includes equipment, engineering, construction, and startup. Annual operating costs include operating, supervisory, and maintenance labor, maintenance materials and parts, reagents, utilities, overhead and administration, property tax, and insurance. In addition to the overall cost effectiveness for a control technology, the incremental cost effectiveness between dominant control options can be presented.

The incremental cost effectiveness is the difference in cost between two dominant technologies divided by the difference in the pollution emission rates after applying these two technologies. An extraordinarily high incremental cost effectiveness can be used as justification that a control technology having only slightly higher removal efficiency than the next technology, but which has significantly higher cost, is not cost effective.
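A minimal sketch of the incremental calculation, again with invented numbers:

```python
def incremental_cost_effectiveness(cost_a, tons_a, cost_b, tons_b):
    """$/ton for the extra removal achieved by option A over option B,
    where A is the more effective, more expensive option and both costs
    are annualized."""
    return (cost_a - cost_b) / (tons_a - tons_b)

# Option A removes 10 more tons/yr than option B at $400,000/yr more:
print(incremental_cost_effectiveness(900_000, 460, 500_000, 450))  # 40000.0 $/ton
```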

The incremental cost effectiveness approach sometimes can be used to establish the practical limits for technologies with variable removal efficiency: the large additional cost of squeezing out the last small improvement in efficiency is not cost effective. The most effective level of control that is not eliminated by the evaluation is proposed by the applicant to be BACT. It is not necessary to continue evaluating technologies that have lower control effectiveness. The permitting agency will review the application for completeness and accuracy, and is responsible for assuring that all pertinent issues are addressed before approving the application.

The ultimate BACT decision is made by the agency after public comment is solicited, public review is held, and comments are addressed. To determine whether a new industrial source of air pollution can be established in any area, the projected increase in ambient pollutant concentration must be determined using dispersion modeling.


If the proposed source is large, and if there is any possibility that the allowed increment could be exceeded or that the NAAQS could be approached, air permitting agencies may require up to 2 years of ambient air monitoring to establish accurate background concentrations. Additional monitoring may be required after the source starts up to ensure that modeling calculations did not underpredict the effect of the new source. The levels of significance for air quality impacts in Class II areas are listed in Table 3. Also, the available increment is not reduced by projects having an insignificant impact, so future PSD increment analyses can neglect these small projects.

Sometimes a simple, conservative screening model can be used to quickly demonstrate that a new source is insignificant. Otherwise, any net emissions increase of VOC above the applicable tons-per-year threshold and subject to PSD would be required to perform an ambient impact analysis. A screening model is built on worst-case assumptions; as a result, it calculates conservatively high ambient pollutant concentrations. A more complex model that uses local meteorological data can be used to calculate the impact of a source on local ambient air quality, with more realistic results than are obtained from a screening model.

The full analysis expands the scope of the dispersion modeling to include existing sources and the secondary emissions from residential, commercial, and industrial growth that accompanies the new activity at the new source or modification. The goals of the reform were to reduce the regulatory burden and to streamline the NSR permitting process. According to the EPA, the proposed revisions would reduce the number and types of activities at a source that would be subject to major NSR review, provide state agencies with more flexibility, encourage the use of pollution prevention and innovative technologies, and address concerns related to permitting sources near Class I areas.

Significant debate delayed adoption of the proposal. A major issue has been the applicability of major NSR to modifications of existing sources.

Unfortunately, meteorological conditions have not always cooperated, and the smoke from chimneys does not always rise up and out of the immediate neighborhood of the emission. To overcome this difficulty for large sources where steam is produced, such as power plants and space-heating boiler facilities, taller and taller stacks have been built.

These tall stacks do not remove the pollution from the atmosphere, but they do aid in reducing ground-level concentrations to a value low enough that harmful or damaging effects are minimized in the vicinity of the source. Temperature inversions, which limit vertical dispersion, can be classified according to the method of formation and according to the height of the base, the thickness, and the intensity. An inversion may be based at the surface or in the upper air. In a surface-based radiation inversion, the ground cools rapidly at night because of the prevalence of long-wave radiation to the outer atmosphere.

Other heat transfer components are negligible, which means the surface of the earth is cooling. The surface air becomes cooler than the air above it, and vertical air flow is halted. In the morning the sun warms the surface of the earth, and the breakup of the inversion is rapid.

Smoke plumes from stacks are quite often trapped in the radiation inversion layer at night and then brought to the ground in a fumigation during morning hours. The result is a high ground-level concentration. Over a cool body of water, heat is transferred downward from the air to the surface, cooling the lower air by convection and forming an evaporation inversion. The cooling of the air may be sufficient to produce fog. When a sea breeze occurs from open water to land, an inversion may move inland, and a continuous fumigation may occur during the daytime.

The classic example is the subsidence inversion over Los Angeles, which results from an almost permanent high-pressure area centered over the north Pacific Ocean near the city. The axis of this high is inclined in such a way that air reaching the California coast is slowly descending or subsiding. During the subsidence, the air compresses and becomes warmer, forming an upper-air inversion. As the cooler sea breeze blows over the surface, the temperature difference increases, and the inversion is intensified. It might be expected that the sea breeze would break up the inversion, but this is not the case. The sea breeze serves only to raise and lower the altitude of the upper-air inversion.

The diurnal cycle is highly influenced by radiation from the sun. When the sun appears in the morning, it heats the earth by radiation, and the surface of the earth becomes warmer than the air above it. This causes the air immediately next to the earth to be warmed by convection. The warmer air tends to rise and creates thermal convection currents in the atmosphere. These are the thermals which birds and glider pilots seek out, and which allow them to soar and rise to great altitudes in the sky.

On a clear night, a process occurs which is the reverse of that described above. The ground radiates its heat to the blackness of space, so that the ground cools off faster than the air. Convection heat transfer between the lower air layer and the ground causes the air close to the ground to become cooler than the air above, and a radiation inversion forms. Energy lost by the surface air is only slowly replaced, and a calm may develop. These convection currents set up by the effect of radiant heat from the sun tend to add to or subtract from the longer-term mixing turbulence created by the weather fronts.

There are significant diurnal differences between the temperature profiles encountered in a rural atmosphere and those in an urban atmosphere. On a clear sunny day in rural areas, a smooth, normal late-afternoon temperature profile, with temperature decreasing with altitude, usually develops. As the sun goes down, the ground begins to radiate heat to the outer atmosphere, and a radiation inversion begins to build up near the ground. Finally, by late evening, a dog-leg shaped inversion is firmly established and remains until the sun rises in the early morning.

Smoke plumes emitted into the atmosphere under the late evening inversion tend to become trapped. Since vertical mixing is very poor, these plumes remain contained in very well-defined layers and can be readily observed as they meander downwind in what is called a fanning fashion. In the early morning, as the inversion breaks up, the top of the thickening layer with a normal (negative) temperature gradient rises until it encounters the bottom edge of the fanning plume.

Since vertical mixing is steadily increasing under this temperature profile, the bottom of the fanning plume suddenly encounters a layer of air in which mixing is relatively good. The plume can then be drawn down to the ground in a fumigation which imposes high ground-level concentrations on the affected countryside. A similar action is encountered in the city. However, in this case, due to the nature of the surfaces and numbers of buildings, the city will hold in the daytime heat, and thus the formation of the inversion is delayed in time.

Furthermore, the urban inversion will form in the upper atmosphere, which loses heat to the outer atmosphere faster than it can be supplied from the surfaces of the city. Thus, the evening urban inversion tends to form in a band above the ground, thickening both toward the outer atmosphere and toward the ground.

Smoke plumes can be trapped by this upper-air radiation inversion, and high ground concentrations will be found in the early morning urban fumigation. Visible plumes are excellent indicators of stability conditions. Five characteristic plume types have been observed and classified by the following names:

1. Looping
2. Coning
3. Fanning
4. Fumigation
5. Lofting

All of these types of plumes can be seen with the naked eye. A recognition of these conditions is helpful to the modeler and in gaining an additional understanding of the dispersion of pollutants. In the section immediately preceding this one, the condition for fanning followed by fumigation has been described.

Lofting occurs under similar conditions to fumigation. However, in this case the plume is trapped above the inversion layer where upward convection is present. Therefore, the plume is lofted upwards with zero ground-level concentration resulting. When the day is very sunny with some wind blowing, radiation from the ground upward is very good.

Strong convection currents moving upward are produced. Under these conditions plumes tend to loop upwards and then down to the ground in what are called looping plumes. When the day is dark with steady, relatively strong winds, the temperature profile will be neutral, so the convection currents will be small. Under these conditions the plume will proceed downwind, spreading in a cone shape. Under these conditions dispersion should most readily be described by Gaussian models.

The stacks provided at the first large steam plant constructed by TVA at Johnsonville, TN, were soon found to be inadequate.

These stacks were then extended, and TVA stack height has crept upwards ever since. As evidence, the large coal-fired power plant at Cumberland City, TN, has two tall stacks, and the Kingston and Widows Creek plants each have a stack topping the former tallest stacks at the Bull Run and Paradise plants.

Ever since structural steel became plentiful and strong enough to carry extreme loads, longer and taller structures have been built. Whatever the reason, it is amusing to compile and contemplate the statistics on tall structures, as listed in Table 4. Dispersion models exist which fit into this scheme. For stationary sources, three cases are defined: area sources, process stacks, and tall stacks.

Most tall stack sources will be associated with fossil-fuel-burning steam electric power generating facilities. Another way to identify a tall stack is through the heat emission rate, which should be greater than 20 MW. Such stacks produce plumes with great buoyancy, and these plumes have a high plume rise after leaving the stack. Furthermore, the exit velocity is high enough to avoid any building downwash. Most process stacks are not connected to sources with a high furnace heat input.

Thus buoyancy is limited, and plume rise may be smaller. Quite often these stack plumes will have a high velocity but little density difference compared to ambient conditions. Thus the plumes might be considered as jets into the atmosphere. Furthermore, since these stacks are usually relatively short, the plumes may be severely affected by the buildings and the terrain that surround them. If the stack efflux velocity is low, stack downwash may become prominent.

In general, this is the kind of stack that is found in a chemical or a petroleum processing plant. Emissions from such a stack range from the usual mixture of particulates, sulfur oxides, nitrogen oxides, and excess air to pure organic and inorganic gases. To further complicate matters, these emissions usually occur within a complex of multiple point emissions; the result being that single-point source calculations are not valid.

A technique for combining these process complex sources must then be devised. The heart of the matter is to estimate the concentration of a pollutant at a particular receptor point by calculation from some basic information about the source of the pollutant and the meteorological conditions. For a detailed discussion of the models and their use, refer to the texts by Turner and by Schnelle and Dey. Solutions to the deterministic models have been analytical and numerical, but the complexities of analytical solution are so great that only a few relatively simple cases have been solved.


Numerical solutions of the more complex situations have been carried out but require a great amount of computer time. Progress appears to be most likely for the deterministic models. However, for the present, the stochastically based Gaussian-type model is the most useful in modeling for regulatory control of pollutants. Algorithms based on the Gaussian model form the basis of models developed for short averaging times of 24 hours or less and for long-time averages up to a year.
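Underlying all of these algorithms is the familiar Gaussian plume equation, sketched minimally below. In practice the dispersion coefficients sigma_y and sigma_z would be read from stability-dependent correlations (such as the Pasquill-Gifford curves), which are not reproduced here.

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m^3) at crosswind offset y (m) and height z (m).

    Q: emission rate (g/s); u: mean wind speed (m/s); H: effective emission
    height (m); sigma_y, sigma_z: dispersion coefficients (m) evaluated at
    the receptor's downwind distance for the prevailing stability class.
    The second vertical term is the usual image source for ground reflection.
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```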

The short-term algorithms require hourly meteorological data, while the long-term algorithms require meteorological data in a frequency distribution form. On a geographical scale, effective algorithms have been devised for distances up to 10 to 20 km for both urban and rural situations. Long-range algorithms are available but are not as effective as those for the shorter distance.

Based on a combination of these conditions, the Gaussian plume model can provide at a receptor, among other outputs, a cumulative frequency distribution of the concentration exceeded during a selected time period. Primarily, the models are used to estimate the atmospheric concentration field in the absence of monitored data. In this case, the model can be a part of an alert system serving to signal when air pollution potential is high, requiring interaction between control agencies and emitters.

The models can serve to locate areas of expected high concentration for correlation with health effects. Real-time models can serve to guide officials in cases of nuclear or industrial accidents or chemical spills. Here the direction of the spreading cloud and areas of critical concentration can be calculated. After an accident, models can be used in an a posteriori analysis to initiate control improvements.

The models serve as the heart of the plan for new-source reviews and the prevention of significant deterioration (PSD) of air quality. Here the models are used to calculate the amount of emission control required to meet ambient air quality standards. The models can be employed in preconstruction evaluation of sites for the location of new industries.

Models have also been used in monitoring-network design and control-technology evaluation. A new system for distributing the models was subsequently put in place. Using these programs, it is possible to predict the ground-level concentrations of a pollutant resulting from a source or a series of multiple sources. These predictions are suitable evidence to submit to states when requesting a permit for new plant construction. Of course, the evidence must show that no ambient air quality standard set by the EPA is exceeded by the predicted concentration.

Briggs plume-rise methods and logarithmic wind speed-altitude equations are also used in the algorithms comprising SCRAM. SCRAM requires the source-receptor configuration to be placed in either a rectangular or polar-type grid system. The rectangular system follows the grid used by the U.S. Geological Survey on its detailed land contour maps. This grid is indicated on the maps by blue ticks spaced 1 km apart running both North-South and East-West. Sources and receptors can be located in reference to this grid system, and the dispersion axis can be located from each source in reference to each of the receptor grid points.

The polar grid system is used in a screening model to select worst-case meteorological conditions. If concentrations under the worst conditions are high enough, a more detailed study is conducted using the rectangular coordinate system. The location of the highest concentration is then pinpointed on the rectangular grid. Meteorological data are obtained from on-site measurement, if possible.

If not, data must be used from the nearest weather bureau station. At the weather stations, data are recorded every hour; however, the center in Asheville digitizes the data only every third hour. Some of the SCRAM programs therefore have meteorological data preprocessors which take the surface data and daily upper-air data from the Asheville center and produce an hourly record of wind speed and direction, temperature, mixing depth, and stability. Thus air-quality impact analysis studies can employ hourly data for short-averaging-time studies.

The meteorological data are used in the dispersion programs to calculate hourly averages, which are then further averaged to determine 3-hour, 8-hour, and other short-term concentrations. Long-term modeling for monthly, seasonal, or annual averages requires use of the same data and a special program known as STAR, for Stability Array. The Industrial Source Complex Short Term model, ISCST3, is a steady-state Gaussian plume model; therefore, parameters such as meteorological conditions and emission rate are constant throughout the calculation.
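The block-averaging of hourly values into longer averaging periods is simple arithmetic; a minimal sketch:

```python
def block_averages(hourly, hours):
    """Non-overlapping block averages of a sequential hourly record,
    e.g. hours=3 for 3-hour averages or hours=8 for 8-hour averages."""
    return [sum(hourly[i:i + hours]) / hours
            for i in range(0, len(hourly) - hours + 1, hours)]

# For a 24-value day, max(block_averages(day, 8)) gives the peak 8-h average.
```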

The time periods for the short-term program include 1, 2, 3, 4, 6, 8, 12, and 24 h. The ISCST3 program can calculate annual concentrations if used with a year of sequential hourly meteorological data. The companion long-term program uses statistical wind summaries and calculates seasonal or annual ground-level concentrations. In both of these programs, the generalized plume-rise equations of Briggs, which are common to most EPA dispersion models, are used.

There are procedures to evaluate the effects of aerodynamic wakes and eddies formed by buildings and other structures. A wind-profile law is used to adjust observed wind speed from measurement height to emission height. Procedures from former models are used to account for variations in terrain height over the receptor grid.



There are one rural and three urban options, which vary in their turbulent-mixing and stability-classification schemes. For regulatory purposes, if the concentrations predicted by the screening model exceed certain significant values, a more refined model must be employed. SCREEN3 allows a group of sources to be merged into one source, and it can account for elevated terrain, building downwash, and wind speed modifications for turbulence. It can be applied from near the source out to several hundred miles downwind.

By accounting for varying dispersion rates with height, refined turbulence based on planetary boundary layer theory, advanced treatment of turbulent mixing, plume height, and terrain effects, AERMOD improves the estimate of downwind dispersion. This model incorporates plume-rise enhancements and the next generation of building downwash effects.

There are also a variety of specialized models for accidental release modeling, roadway modeling, offshore sources, and regional transport modeling. To make the calculation, it is obvious that we must have a well-defined source and that we must know the geographic relation between the source and the receptor. But we must understand the means of transport between the source and the receptor, as well. Thus source—transport—receptor becomes the trilogy which we must quantitatively define in order to make the desired computation.

We need to consider first whether the source is mobile or stationary, and then whether the pollutant is emitted from a point, in a line, or more generally from an area. Then we must determine its chemical and physical properties. The properties can be determined most appropriately by sampling and analysis, when possible. When sampling is not possible, we turn, for example, to estimation by a mass balance to determine the amount of material lost as pollutant.

The major factors that we need to know about the source are:

1. Composition, concentration, and density
2. Velocity of emission
3. Temperature of emission
4. Pressure of emission
5. Diameter of emitting stack or pipe
6. Effective height of emission

From these data, we can calculate the flow rate of the total stream and of the pollutant in question.
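A minimal sketch of that calculation (the stack values below are invented for illustration):

```python
import math

stack_diameter = 2.0      # m (assumed)
exit_velocity = 15.0      # m/s (assumed)
pollutant_conc = 5.0e-4   # kg of pollutant per m^3 of stack gas (assumed)

area = math.pi * stack_diameter ** 2 / 4.0     # stack cross-section, m^2
total_flow = exit_velocity * area              # total stream flow, m^3/s
pollutant_flow = total_flow * pollutant_conc   # pollutant mass flow, kg/s
```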

The transport factors are the subject of basic meteorology. The way in which atmospheric characteristics affect the concentration of air pollutants after they leave the source can be viewed in three stages:

1. Effective emission height
2. Bulk transport of the pollutants
3. Dispersion of the pollutants

The higher the plume goes, the lower will be the resultant ground-level concentration. The momentum of the gases rising up the chimney initially forces these gases into the atmosphere. This momentum is proportional to the stack gas velocity. However, stack gas velocity cannot sustain the rise of the gases after they leave the chimney and encounter the wind, which eventually will cause the plume to bend over.

Thus mean wind speed is a critical factor in determining plume rise. As the upward plume momentum is spent, further plume rise is dependent upon the plume density. Plumes that are heavier than air will tend to sink, while those with a density less than that of air will continue to rise until the buoyancy effect is spent.

The buoyancy effect in hot plumes is usually the predominant mechanism. When the atmospheric temperature increases with altitude, an inversion is said to exist. Loss of plume buoyancy tends to occur more quickly in an inversion. Thus, the plume may cease to rise at a lower altitude and be trapped by the inversion. Many formulas have been devised to relate the chimney and the meteorological parameters to the plume rise.
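One widely used formulation, due to Briggs and discussed further below, first computes a buoyancy flux from the stack parameters and then a final rise. The sketch below shows one common form for buoyant plumes in neutral or unstable conditions; it illustrates the structure of the calculation and is not a regulatory implementation.

```python
def briggs_final_rise(vs, d, Ts, Ta, u):
    """Approximate final plume rise (m) for a buoyant plume in neutral or
    unstable conditions, in one common form of the Briggs equations.

    vs: stack exit velocity (m/s); d: stack diameter (m); Ts, Ta: stack gas
    and ambient temperatures (K); u: wind speed at stack top (m/s).
    """
    g = 9.81
    F = g * vs * d ** 2 * (Ts - Ta) / (4.0 * Ts)   # buoyancy flux, m^4/s^3
    x_star = 14.0 * F ** 0.625 if F < 55.0 else 34.0 * F ** 0.4
    x_final = 3.5 * x_star                         # distance to final rise, m
    return 1.6 * F ** (1.0 / 3.0) * x_final ** (2.0 / 3.0) / u

# effective source height = physical stack height + briggs_final_rise(...)
```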

The most commonly used model, credited to Briggs, will be discussed in a later section. The plume rise that is calculated from the model is added to the actual height of the chimney and is termed the effective source height. It is this height that is used in the concentration-prediction model. Specification of the wind speed must be based on data usually taken at weather stations separated by large distances. Since wind velocity and direction are strongly affected by the surface conditions, the nature of the surface, predominant topographic features such as hills and valleys, and the presence of lakes, rivers, and buildings, the exact path of pollutant flow is difficult to determine.

Furthermore, wind patterns vary in time, for example, from day to night. The Gaussian concentration model does not take into account wind speed variation with altitude, and only in a few cases are there algorithms to account for the variation in topography. The dispersion of a plume from a continuous elevated source increases with increasing surface roughness and with increasing upward convective air currents.

Thus, a clear summer day produces the best meteorological conditions for dispersion, and a cold winter morning with a strong inversion results in the worst conditions for dispersion. Air quality criteria delineate the effects of air pollution and are scientifically determined dosage-response relationships. These relationships specify the reaction of the receptor, or the effects, when the receptor is exposed to a particular level of concentration for varying periods of time. Air quality standards are based on air quality criteria and set forth the concentration for a given averaging time.

Regulations have been developed from air quality criteria and standards which set the ambient quality limits. Thus the objective of our calculations will be to determine if an emission will result in ambient concentrations which meet air quality standards that have been set by reference to air quality criteria. Usually, in addition to the receptor, the locus of the point of maximum concentration, or the contour enclosing an area of maximum concentration, and the value of the concentration associated with the locus or contour should be determined.

The short-time averages that are considered in regulations are usually 3 min, 15 min, 1 h, 3 h, or 24 h. Long-time averages are one week, one month, a season, or a year.

The term source testing implies using test methods to measure concentration on a one-time or snapshot basis, as opposed to continuously monitoring the source as discussed in Chapter 6.

Source testing may be performed to provide design data or to measure performance of a process, or it may be prescribed on a periodic basis to demonstrate compliance with air permit emission limitations. Most source test procedures require labor to set up the test, collect samples, and analyze the results. A list of source test methods that are described in the CFR is provided in Table 5. These procedures have been tested, reviewed, and adopted by the EPA as the reference source test methods for a number of pollutants in a variety of applications.

It would be an inefficient use of public funds for the EPA to sponsor research for test methods that cover unusual operating conditions for a unique process, and it is impossible to predict future processes, conditions, and improvements. Sometimes experience, judgment, and skill are needed to modify the test method to overcome a limitation that arises in a specific application. In such cases, test reports can reference the test method and describe the modification.

Collecting a representative sample is easy to do for gaseous pollutants, since molecules of gas can be assumed to be evenly distributed throughout the gas stream due to mixing and diffusion. It is highly unlikely that gaseous pollutants will segregate in a moving gas stream. A simple probe can be used to withdraw a sample.

Due care must be used to avoid pulling a sample from a nonrepresentative location, such as just downstream of an injection point. Teflon, stainless steel, and glass sample lines and containers often are used to avoid reactions with pollutants. Gas velocity in the duct is measured with a pitot tube; details of the tip of a pitot tube are shown in Figure 5. The measured pressure difference is used to calculate velocity.
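That calculation can be sketched as follows. The equation is the general form used by EPA Method 2 for S-type pitot tubes; the constant fits the metric units named in the arguments, and a pitot coefficient of about 0.84 is typical, but the published method governs actual compliance testing.

```python
import math

def stack_gas_velocity(dp_mm_h2o, Ts_K, Ps_mm_hg, Ms_g_mol, Cp=0.84):
    """Point gas velocity (m/s) from a pitot differential-pressure reading:
    v = Kp * Cp * sqrt(dp * Ts / (Ps * Ms)).

    dp_mm_h2o: velocity head (mm H2O); Ts_K: absolute stack temperature (K);
    Ps_mm_hg: absolute stack pressure (mm Hg); Ms_g_mol: wet molecular
    weight of the stack gas (g/g-mole); Cp: pitot tube coefficient.
    """
    Kp = 34.97  # constant for the metric units above
    return Kp * Cp * math.sqrt(dp_mm_h2o * Ts_K / (Ps_mm_hg * Ms_g_mol))
```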

The key is to position the pitot tube at the correct points in the duct so that the average velocity is determined. This is done by positioning the pitot tube at the centroid of equal-area segments of the duct. Method 1 in the CFR provides tables for probe positions based on this principle. For small-diameter stacks, the pitot tube can reach across the stack to pick up the points on the far side. For large-diameter stacks, it is easier to reach no more than halfway across the stack, so four sampling ports are provided to allow shorter sampling probes.

The stack cross-section is divided into 12 equal areas with the location of traverse points indicated. Similarly, because each sampling position is representative of a small area of the duct or stack, particulate samples are withdrawn at the same traverse points at which velocity measurements are made.

However, because particles do not necessarily follow the streamlines of gas flow and because gravity can act on particles in a horizontal duct, Method 1 recommends more traverse points for particulate sampling than for a simple velocity traverse. The minimum number of sample points for traverses depends on the proximity of the test port to flow disturbances in the duct, and to a lesser extent, the duct size. The minimum number of sample points for a velocity traverse is illustrated in Figure 5. The minimum number of samples to be taken for a particulate traverse is illustrated in Figure 5.
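The equal-area principle behind those traverse-point tables can be sketched geometrically for a circular duct: each point sits at the centroid radius of one of the equal-area annuli. The helper below is a hypothetical illustration; the method's own tables govern actual testing.

```python
import math

def traverse_radii(duct_radius, points_per_diameter):
    """Radial positions (from duct center) at the centroids of equal-area
    annuli. points_per_diameter must be even; the returned radii apply on
    each side of the duct center line."""
    n = points_per_diameter // 2  # annuli on each side of center
    return [duct_radius * math.sqrt((2 * i - 1) / (2 * n))
            for i in range(1, n + 1)]

# traverse_radii(0.5, 6) -> three radii for a 1-m duct, 6 points per diameter
```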

If the sample velocity at the probe tip is too high, excess gas is drawn into the probe; meanwhile, particles with sufficient momentum tend to continue traveling in a straight line, leaving the gas flow streamlines, and are not carried into the sampling probe, as illustrated in Figure 5. This produces a sample that, after measuring the collected gas volume and weighing the collected particulate filter, has an erroneously low particulate concentration.

Similarly, if the sample velocity is too low, excess gas is diverted away from the probe while particles are carried into the probe, as illustrated in Figure 5. The correct isokinetic sample flow rate is determined by conducting a velocity traverse prior to collecting a particulate sample. After the sample is taken and as data are being evaluated, the sample velocity as a percentage of gas velocity is determined and reported as a quality check on the particulate sample.
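That quality check is simply the ratio of sample-nozzle velocity to stack gas velocity, expressed as a percentage; 100% is perfectly isokinetic. The acceptance window (commonly quoted as roughly 90 to 110%) should be taken from the method itself.

```python
def percent_isokinetic(nozzle_velocity, stack_velocity):
    """Sample (nozzle) gas velocity as a percentage of stack gas velocity;
    values far from 100% flag a biased particulate sample."""
    return 100.0 * nozzle_velocity / stack_velocity
```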

The range of emission concentrations, temperatures, and pressures encountered in source testing is sometimes many magnitudes greater than found at an ambient air sampling station. Because of this wide range of conditions, sampling and analysis techniques and equipment are different for each case, even though the same general principles may be employed. This chapter deals with both ambient air quality sampling and monitoring and continuous emissions monitoring. Lodge presents a good discussion in Methods of Air Sampling and Analysis.

Usually a sample network would be installed that would blanket the area with a series of similar stations. The object would be to measure the amount of gaseous and particulate matter at enough locations to make the data statistically significant. It is not uncommon to find each station in a network equipped with simple, unsophisticated grab sampling devices. However, quite a few sophisticated monitoring networks have been developed which contain continuous monitors with telemetry and computer control.

Meteorological variables are also monitored and correlated with the concentration data. The information is then used:

1. To establish and evaluate control measures
2. To evaluate atmospheric-diffusion model parameters
3. To determine areas and time periods when hazardous levels of pollution exist in the atmosphere
4. For emergency warning systems


Stations are permanent or, at least, long-term installations. An advantage of fixed sampling is that measurements are made concurrently at all sites, providing directly comparable information, which is particularly important in determining relationship of polluting sources to local air quality and in tracing dispersion of pollutants throughout the area. The chief advantage of mobile sampling is that air quality measurements can be made at many sites — far more than would be feasible in a fixed sampling program.

Mobile sampling provides better definition of the geographical variations if the program is long enough to generate meaningful data. With continuous monitors, pollutant concentrations are instantaneously displayed on a meter and continuously recorded on a chart, magnetic tape, or disk. Integrated sampling is done with devices that collect a sample over some specified time interval, after which the sample is sent to a laboratory for analysis.

The result is a single pollutant concentration that has been integrated, or averaged, over the entire sampling period. This is an older technique and is currently in limited use. Continuous or automatic monitoring instruments offer some advantages over integrating samplers; for example, there is a capability for furnishing short-interval data, and there is a rapid availability of data.

Moreover, output of the instruments can be electronically sent to a central point. Also, continuous monitors require less laboratory support. They also may be necessary to monitor some pollutants where no integrating method is available or where it is necessary to collect data over short averaging times, for example, 15 min. Automated monitors also have some drawbacks. They require more sophisticated maintenance and calibration, and the operators and maintenance personnel have to be more highly technically trained.

The selection of a monitoring system is influenced by the averaging time for which concentrations are desired; it should be consistent with the averaging times specified by air quality standards.

The integrated sampler defines SO2 levels over a broad area, and the continuous devices provide detailed information on diurnal patterns. The short averaging time of interest for CO and ozone dictates the use of continuous monitors for these pollutants. The selection of the methodology to be used is an important step in the design of the monitoring portion of the assessment study.

The EPA, as well as most of the states, maintains its own surveillance networks. The ideal objective when installing a monitoring network is to be able to obtain continuous real-time data. Only three of the standard methods listed in Table 6. employ continuous or semicontinuous monitors, and no satisfactory device exists as yet for determining suspended particulate matter on a continuous basis. It is possible to assemble such a system from the hardware components that are now available. The major drawback of this automatic system is the limitations of the computer software; there is little economic information available for formulating the ambient air quality and optimizing models.

Many automated environmental surveillance systems employing continuous monitors exist in the U.S. None are quite as sophisticated as would be implied by the system of Figure 6. The sample is retained in the collection equipment, which is then removed from the sample train. Further processing takes place to prepare the sample for analysis. Most of the analysis techniques are standard procedures involving one or more of the following methods:

1. Gravimetric
2. Volumetric
3. Microscopy
4. Instrumental
   a. Spectrophotometric: ultraviolet, visible (colorimetry), or infrared
   b. Electrical: conductometric, coulometric, or titrimetric
   c. Emission spectroscopy
   d. Mass spectroscopy
   e. Chromatography

Fine particulates remain suspended in the atmosphere for long periods of time and absorb, reflect, and scatter the sunlight, obscuring visibility.

When breathed, they penetrate deeply into the lungs. They also cause economic loss because of their soiling and corrosive properties. The new EPA ambient particulate-matter definition includes only the part of the size distribution that could penetrate into the human thorax. The sampler used for this measurement has a two-stage selective inlet. Air is drawn into the inlet and deflected downward into the acceleration jets of the first-stage fractionator.

Larger, non-inhalable particles are removed. Air then flows through the first-stage vent tubes and then through the second fractionation stage. This standard went into effect on July 31.

Title IV, which is intended to ensure compliance with the Acid Rain Program, sets out provisions for CEM in the two-phase utility power industry control strategy. This should spawn CEM techniques optimized for the chemical compound being monitored. Title V will require CEM for compliance assurance. The collection of real-time emission data will be the first step toward attaining the nationally mandated reduction in SO2 and NOx emissions.

Furthermore, CEM can be used to track the use of allowances in the new market-based SO2 emissions trading program. CEM is carried out by two general methods, in situ and extractive. Each of the methods measures on a volumetric basis, in ppm for example. An EPA database of CEM applications may be browsed or searched by HAP or analyzer type; it is found on the EPA home page at: www.

A typical continuous monitor contains:

1. A primary air-moving device, usually a vacuum pump, to pull the air sample through the instrument
2. A flow-control and flow-monitoring device, usually a constant-pressure regulator and rotameter
3. A pollutant detector based on one of various primary sensing techniques
4. Automatic reagent addition where needed
5. Electronic circuitry for transducing the primary signal to a signal suitable for recording and telemetering
6. Provisions for automatic calibration, usually several solenoid valves which can be operated remotely to connect the inlet gas to a scrubbing train for removal of all pollutants and establishment of a chemical zero, or, alternatively, to one or more span gases for setting the chemical range of the instrument

Many monitors of the general type described above have been developed for all of the federally regulated gaseous pollutants and others as well. The remainder of this section will provide details of these devices. This list is not exhaustive. Furthermore, although the devices described below are indicated for a particular pollutant, they can be used for other types of pollutants as well. An early electroconductivity monitor was built by Dr. Thomas to monitor SO2 in a greenhouse during a study of the effects of SO2 on plants.

In an electroconductivity apparatus, a reagent passes through a reference conductivity cell and then into an absorbing column. Air is drawn by a vacuum pump counter-currently to the reagent flow through the absorbing tube, then through a separator to the exhaust. The SO2 is absorbed in the reagent which then passes through a measuring conductivity cell. A stabilized AC voltage is impressed across the conductivity cells resulting in a current flow that is directly proportional to the conductivity of the solution. The value of this current is measured by connecting a resistor in series with the cell to obtain an AC voltage which is proportional to the current.

This voltage is then rectified to direct current. The DC signals from the rectifiers are connected in opposition, thus resulting in a voltage that induces a current through a meter which is directly related to the difference in conductivity between the two solutions. To set the zero on the instrument, any SO2 is removed by passing the air through a soda-lime absorber. The conductivity in both cells should then be the same, and the meter output should be zero.

The SO2 is oxidized in a reagent such as deionized water to form the sulfate ion, which causes a conductometric change related to the amount of SO2 present. The method is basically simple. However, since conductivity is temperature dependent, the analyzer section of any instrument must be thermostated. The coulometric type of analyzer uses several electrodes, one of which is a reference electrode. An air sample is drawn through a detector cell which contains a buffered solution of KI.

Unreacted iodine is reduced to iodide, I-, at the cathode. The difference is proportional to the SO2 concentration through a Faradaic expression. A selective scrubber removes interferents such as O3, mercaptans, and H2S, improving the specificity of the instrument. A zero gas is provided by using an activated carbon filter to remove impurities in air, including SO2. The gas flow is regulated by pressure control and a capillary tube, which provide a constant pressure and pressure drop, and thus a constant flow.

A pulsation damper adds volume to the system to provide stability of flow. The water supply is constantly replenished as evaporation occurs.

The nondispersive instruments do not employ spectral separation of the radiation but make use of the specific radiation absorption of heteroatomic gases in the infrared range. The total absorption in this range is measured by an alternating light photometer with two parallel beams and a selective radiation receiver.

The sample gas is passed through the sample cell which is arranged parallel to a reference cell containing a gas which absorbs no radiation. The radiation emitted by two nickel-chrome filaments reaches the two receiving cells after passing through the sample cell and reference cell, respectively. The receiving cells are filled with the gas component to be measured CO in this case and are separated by a metal diaphragm.

The incident radiation is absorbed selectively in the specific absorption bands. Every gas has an absorption spectrum consisting of one or two individual absorption bands which are specific for the gas; carbon monoxide, for example, has a characteristic band in the infrared. The absorbed energy is transformed into thermal energy. Any difference in absorbed energy produces a temperature and pressure difference between the two receiving cells. This pressure difference deflects the diaphragm and thus changes the capacitance of the diaphragm capacitor.

The diaphragm capacitor is connected to a high-impedance resistor which generates an alternating millivolt voltage that can be amplified, rectified, and displayed by a recorder. One major difference among the analyzers is the length of the measuring cell. The length of the cell has little effect on the measuring range of the analyzer, and both analyzers are capable of being changed to provide several measuring ranges.

The beams are blocked simultaneously ten times per second by the chopper, a two-segmented blade rotating at five revolutions per second. In the unblocked condition, each beam passes through the associated cell and into the detector. The sample cell is a flow-through tube that receives a continuous stream of sample. The reference cell is a sealed tube filled with a reference gas.

The reference gas is selected for minimal absorption of infrared energy of those wavelengths absorbed by the sample component of interest. The detector consists of two sealed compartments separated by a flexible metal diaphragm. Each compartment has an infrared transmitting window, to permit entry of the corresponding energy beam. Both chambers are filled, to the same subatmospheric pressure, with the vapor of the component of interest.

Use of this substance as the gas charge in the detector causes the instrument to respond only to that portion of the net difference in energy due to the presence of the measured component. In operation, the presence of the infrared-absorbing component of interest in the sample stream causes a difference in energy levels between the sample and reference sides of the system. This differential energy increment undergoes the following sequence of transformations:

a. Radiant energy: In the sample cell, part of the original energy of the sample beam is absorbed by the component of interest.

In the reference cell, however, absorption of energy from the reference beam is negligible.

b. Temperature: Inside the detector, each beam heats the gas in the corresponding chamber. However, since the energy of the reference beam is greater, gas in the reference chamber is heated more.

c. Pressure: The higher temperature of gas in the reference chamber raises the pressure of this compartment above that of the sample chamber.

d. Mechanical energy: Gas pressure in the reference chamber distends the diaphragm toward the sample chamber. The energy increment is thus expended in flexing the diaphragm.

e. Capacitance: The diaphragm and the adjacent stationary metal button constitute a two-plate variable capacitor. Distention of the diaphragm away from the button decreases the capacitance.

When the chopper blocks the beams, pressures in the two chambers equalize, and the diaphragm returns to the undistended condition. As the chopper alternately blocks and unblocks the beams, therefore, the diaphragm pulses, thus changing detector capacitance cyclically.

The detector is part of an amplitude modulation circuit that impresses the 10 Hz information signal on a 10 MHz carrier wave provided by a crystal-controlled radio-frequency oscillator. Additional electronic circuitry in the oscillator unit demodulates and filters the resultant signal, yielding a 10 Hz signal.

The flame photometric detector produces a specificity ratio of between 10,000 and 30,000 to 1 for sulfur compounds.

A photomultiplier tube detects the emission. For the usual ambient conditions, the use of a chromatographic column is not warranted, since the sulfur in ambient air is usually in the form of SO2.

Carbon atoms produce ions in a hydrogen flame. Thus, the air stream containing hydrocarbons is fed into a hydrogen flame. The ions produced are detected by an electrometer.

The hydrocarbon concentration is proportional to the current. It has been found that CH4 can be separated from other hydrocarbons by the gas chromatographic technique, and an analyzer employing flame ionization detection has been constructed on this principle. Carbon monoxide is detected in the device by catalytically converting the CO to CH4 over a nickel catalyst after it has been separated from the rest of the gases by the gas chromatographic technique.

The optimum excitation takes place in a narrower range of wavelengths.

An ultraviolet light, such as a quartz deuterium lamp, is used as the source of radiation.