Thursday, October 31, 2019

Juvenile courts Essay Example | Topics and Well Written Essays - 500 words

Juvenile courts - Essay Example Feld argues that, by creating a separate court system for juvenile offenders, these young offenders will not be punished in a way that will have an effect on their current behavior. While he agrees that young offenders do not require the harsher punishments seen within the regular court system, he believes that treatment will still be dismal if the two courts are separated. In regards to changing the current state of the juvenile court toward providing social welfare to offenders, Feld is against this because, in previous attempts, the way offenders were treated was at odds with the way they were punished. As nothing was being done as it should have been, Feld believes that there is simply no point in having a juvenile court. Feld would like to abolish the juvenile courts altogether because he has not seen anything to suggest that they are doing any good. He thinks that there is nothing wrong with juveniles being tried in the typical court, just as long as they are given proper punishments that reflect how young they are (as well as the crime that they committed). The "cushions" that Feld recommends for juveniles being adjudicated in adult courts involve the young offenders avoiding punishments that are meant for adult offenders; this is a fear that he has with keeping them in the typical court, yet he believes things could be worse if there were separate courts. These "cushions" include the creation of waivers for specific offenses - these waivers would allow offenders to go free without imprisonment or a lengthy punishment, but perhaps with community service or something similar to that effect. Another "cushion" ensures that young offenders will have a maximum punishment, so that they are never given the same punishment as their adult counterparts. These "cushions" can prove to be helpful as they take into consideration that young offenders do not deserve the same punishments as people older than them, or as those that might have committed more serious offenses. They can prove to be

Tuesday, October 29, 2019

Morality as Anti-Nature Essay Example for Free

Morality as Anti-Nature Essay Friedrich Nietzsche stands as one of the philosophers who tackled the complexities of human existence and its condition. It is noteworthy that most of his works take standpoints on what he refers to as the Ubermensch. The conception of such is designed to inspire the individual to substantiate his existence and rouse his self-overcoming and affirmative character. This can be said to arise from the idea of creating a self through the process of undergoing a destructive condition that enables the self to acquire greater power in relation to others. The development of such a self is dependent upon the recognition of the anti-naturalistic character of morality, which he discusses in The Twilight of the Idols in the section entitled "Morality as Anti-Nature". Within the aforementioned text, Nietzsche argues that morality hinders the individual from experiencing life as it limits an individual's free will, thereby leading to the creation of an individual who is incapable of life itself. He states that morality is a "revolt against life" (2006, p. 467). It is a revolt against life as it is based on the negation of an individual's basic instinct to act freely in accordance with his passions. According to Nietzsche, this is evident in the case of Christian morality, which places emphasis on the control of the passions. Within Christian morality, an individual who is incapable of controlling his passions is considered to be immoral, as he is incapable of practicing restraint upon himself. Examples of this are evident if one considers that within Christian morality, to be saintly requires restraining one's desires, and hence one can only follow the path of Christ if one denies all of his desires, the denial of which involves the denial of all worldly things. He states that, within the context of this morality, "disciplining…has put the emphasis throughout the ages on eradication…but attacking the passions at the root means attacking life at the root: the practice of the church is inimical to life" (Nietzsche, 2006, p. 66). The practice of the church, its imposition of morality, contradicts the essence of life, which is the actualization of an individual's self, since it delimits an individual to one particular kind of existence. For example, Christian morality has the Ten Commandments. If an individual follows these commandments, the individual's spiritual life is ensured in the afterworld. Nietzsche argues that by following these commandments, the individual is at once delimited to one particular form of existence. This does not necessarily mean that Nietzsche applauds acts of murder; he is merely stating that by following moral rules and moral norms the individual is at once preventing himself from experiencing a particular form of life and hence the actuality of life itself. It is important to note that by presenting a criticism of Christian moral values and moral values in general, Nietzsche does not necessarily prescribe an individual to follow his moral code. In fact, one might state that Nietzsche does not possess a moral code. He states, "Whenever we speak of values, we speak under the inspiration…of life: life forces us to establish values; life itself evaluates through us when we posit values…It follows from this that even that anti-nature of a morality which conceives God as the antithesis and condemnation of life is merely a value judgment on the part of life." (Nietzsche, 2006, p. 
467) Within this context, Nietzsche recognizes that the anti-nature of morality is a value in itself. It differs, however, from a moral code, since it does not delimit an individual by prescribing actions which he ought and ought not to follow. The importance of the anti-nature of morality lies in its emphasis on the affirmation of the individual. Within the text, Nietzsche claims, "morality in so far as it condemns…is a specific error…We seek our honour in being affirmative" (2006, p. 468). It is within this context that one may understand why, for Nietzsche, the Ubermensch is an individual whose choices are dependent upon the ends justifying the means: to state that one performs a particular action because the means justify the end is equivalent to performing the action because the act itself adheres to what a particular moral rule considers to be 'good'. This is evident if one considers that in order for an individual S to consider Q a 'good' act, wherein Q is good due to P and Q necessarily follows from P, it is necessary for P to be good within the context of a moral norm M. For example, a person may consider giving alms to the poor good since the act of giving alms itself is considered 'good' within the context of a particular moral norm. As opposed to the example mentioned above, the Ubermensch acts in accordance with what may be achieved by an act [the end of the act itself], since what the Ubermensch places emphasis on is the joy that may be achieved in the act itself. Alex McIntyre states, "joy in the actual and active of every kind constitutes the fundamental end from which Nietzsche develops his critique of morality" (1997, p. 6). Although Nietzsche's criticisms of morality and its constraints upon the individual are valid, it is still impossible to conceive of a world wherein no morality is applied. Within the context of social reality, moral norms function to ensure order within society. Although laws may function by themselves to ensure the order of society, laws themselves are dependent upon a particular moral norm which the society adheres to.

References

McIntyre, A. (1997). The Sovereignty of Joy: Nietzsche's Vision of Grand Politics. Toronto: U of Toronto P.

Nietzsche, F. (2006). Morality as Anti-Nature. In K. Ansell-Pearson & D. Large (Eds.), The Nietzsche Reader. London: Wiley-Blackwell.

Sunday, October 27, 2019

Environmental Impact of Fossil Fuels

Environmental Impact of Fossil Fuels The pollution from large combustion plants comes from fossil fuels such as coal, oil (petroleum) and natural gas. Fossil fuels were formed from the remains of organisms which lived hundreds of millions of years ago. There are three main types of fossil fuel: coal, oil (petroleum) and natural gas. Coal was formed from the remains of trees and plants which grew in swamps. Oil was formed in the sea: the sea contains many tiny animals and plants called plankton, which get their energy to live and multiply from sunlight. When they die they sink to the bottom of the sea, and those that died millions of years ago formed the oil and gas which are the main sources of fuel. Natural gas is mainly made up of methane, which is given off by anaerobic bacteria breaking down some of the organic matter which formed oil and coal. Fossil fuels are burned to produce energy. Pollution is defined as the contamination of air, water or soil by materials that interfere with human health, the quality of life, or the natural functioning of ecosystems. Air pollution is the pollution of the atmosphere by emissions from industrial plants, incinerators, internal combustion engines and other sources. Pollutants can be classified as either primary or secondary. Primary pollutants are substances directly produced by a process, such as ash from a volcanic eruption or the carbon monoxide gas from a motor vehicle exhaust. Secondary pollutants are not emitted directly; rather, they form in the air when primary pollutants react or interact. An important example of a secondary pollutant is ozone, one of the many secondary pollutants that make up photochemical smog. (Pepper, I.L., C.P. Gerba and M.L. Brusseau, 1996)

Source

Large combustion plants include coal power stations, oil refineries, natural gas processing plants and others.

Coal power plant

Coal is composed of carbon, sulphur, hydrogen, oxygen and nitrogen. In a coal power station, pollutants are formed by the burning of the fossil fuel coal. Burning coal at high temperature will produce oxides of nitrogen. Inside the coal are compounds of sulphur and nitrogen; these originate from the dead organisms that make up the coal. When the coal is burnt, the sulphur and nitrogen are oxidised, producing SOx and NOx, which are released into the atmosphere as primary pollutants. The NOx produced from combusting the nitrogen in the coal is called fuel NOx. There is also NOx produced by the combination of oxygen and nitrogen in the air; this is known as thermal NOx. (Peirce, J.F., R.F. Weiner and P.A. Vesilind, 1998) When a fuel burns, it reacts with oxygen to form oxides. If the fuel burns completely, then all the carbon in it is turned into carbon dioxide, which is slightly acidic. If there is not much air available, the carbon may be turned into carbon monoxide, which is a very poisonous gas. The main primary pollutants created by a coal-fired power station are NOx, SOx and VOCs. Sulphur oxides are created from the burning of the coal. Coal naturally contains sulphur, the amount of which varies depending on which organisms created the coal. When the coal is burnt, so also is the sulphur.
The carbon dioxide released by coal power plants causes climate change and global warming; coal-fired power plants are a main contributor to CO2 in the air. Proteins in living organisms contain nitrogen. When coal burns, NOx is formed in two ways: nitrogen bound in the coal is released and combines with oxygen to form fuel NOx, and high combustion temperatures break apart stable nitrogen molecules in the air, which then recombine with oxygen to form thermal NOx.

The primary pollutants formed in a coal-fired power plant are: NOx, formed when the high temperature and pressure of the combustion cause atmospheric nitrogen and oxygen to react; VOCs (volatile organic compounds), produced when unburnt hydrocarbons are released through the chimney of the furnace; carbon monoxide, a gas formed as a by-product of the incomplete combustion of all fossil fuels (exposure to carbon monoxide can cause headaches and place additional stress on people with heart disease); and sulphur dioxide, which mostly comes from the burning of coal or oil in power plants. Sulphur dioxide reacts in the atmosphere to form acid rain and particles, and is also a major contributor to photochemical smog. Nitrogen oxides and sulphur oxides are important constituents of acid rain. These gases combine with water vapour in clouds to form sulphuric and nitric acids, which become part of rain and snow. As the acids accumulate, lakes and rivers become too acidic for plant and animal life. (Peirce, J.F., R.F. Weiner and P.A. Vesilind, 1998)

Impact of coal-fired power stations

Coal-fired power stations are major sources of pollution. Coal is used extensively because there is a lot of it around. Although it produces pollutants, coal will remain an important fuel for some considerable time to come. A coal-fired power station has three main inputs: coal, cooling water and pure water to use in the steam turbines. The main outputs are electricity, waste heat, CO2, SOx, NOx and ash. Fossil fuels are also linked to a decrease in air quality. Clean air is essential to life and good health. Several important pollutants are produced by fossil fuel combustion: carbon monoxide, nitrogen oxides, sulphur oxides, and hydrocarbons. In addition, total suspended particulates contribute to air pollution, and nitrogen oxides and hydrocarbons can combine in the atmosphere to form tropospheric ozone, the major constituent of smog. Coal-fired power stations are responsible for the emission of greenhouse gases such as carbon dioxide. The amount of carbon dioxide in the atmosphere must be carefully balanced to maintain the greenhouse effect, which is what keeps the surface of the earth warm enough to support life. Like all things in nature, a change in one part of the environment can result in changes in another. The effect of increased greenhouse gases in the environment is that the temperature of the atmosphere is expected to increase. Some scientists predict that this temperature increase could result in the following: the destruction of ecosystems such as the Great Barrier Reef; a change in the world's weather patterns, resulting in an increase in both intensity and frequency of storms, cyclones, floods and droughts; the melting of glaciers and polar ice; rising sea levels resulting in the permanent flooding of vast areas; and damage to economies through the destruction of crops and industry.
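To give a rough sense of scale for the carbon dioxide figures discussed above, the short sketch below applies basic combustion stoichiometry (C + O2 -> CO2). It is a simplified illustration and not taken from the original article; the 70% carbon content is an assumed, illustrative value, and real coals and boilers vary.

```python
# Rough estimate of CO2 emitted when the carbon in coal is burned completely.
# Assumes complete combustion (C + O2 -> CO2) and an illustrative carbon
# content of 70% by mass; both figures are assumptions, not measured values.

M_C = 12.01    # molar mass of carbon, g/mol
M_CO2 = 44.01  # molar mass of carbon dioxide, g/mol

def co2_from_coal(coal_kg, carbon_fraction=0.70):
    """Return the mass of CO2 (kg) from burning `coal_kg` of coal completely."""
    carbon_kg = coal_kg * carbon_fraction
    return carbon_kg * (M_CO2 / M_C)  # each kg of carbon yields ~3.7 kg of CO2

print(co2_from_coal(1000))  # roughly 2,560 kg of CO2 per tonne of this coal
```

The point of the ratio M_CO2 / M_C is simply that the emitted CO2 weighs considerably more than the carbon burned, because each carbon atom picks up two oxygen atoms from the air.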
The effect of releasing gaseous acids into the atmosphere, as a result of modern lifestyles, is acid rain and, more seriously, global warming. The effects of global warming are of such great concern that many nations have agreed to reduce greenhouse gas emissions. Mining the coal that is to be used in the generation of electricity results in the destruction of the environment, and water systems can be threatened by the run-off that results from the washing of coal. The pollution caused by acid rain can have a variety of effects on the environment, mostly negative: acid rain acts as a form of chemical weathering on buildings that are constructed from limestone or marble. Acid rain can also contaminate water supplies by dissolving the lead and copper pipes which transport the water to houses and other buildings. Another effect of acid rain is the pollution of lakes and reservoirs, killing most of the wildlife; this includes trees, plants and animal habitats. Acid rain also affects rivers and lakes: as the acidity level goes up, the pH falls. With the pH of the water below 4.5 most fish will die, and this has a knock-on effect on wildlife, since if the fish die the birds that feed on the fish will also die.

SOx emissions

All living organisms contain compounds of sulphur, which are the origin of the sulphur found in coal. When coal burns, the sulphur compounds are converted to oxides of sulphur. Sulphur dioxide exposure can affect people who suffer from asthma or emphysema by making it more difficult to breathe. It can also irritate people's eyes, noses and throats. Sulphur dioxide can harm trees and crops, damage buildings, and make it harder for people to see long distances.

NOx emissions

The flue gases in the power station contain oxides of nitrogen (NOx). This is because fuels contain compounds of nitrogen formed from the proteins contained in organisms. When the fuel is burnt, these nitrogen compounds are oxidised to form fuel NOx. At the high temperature of combustion, atmospheric nitrogen and oxygen combine to form thermal NOx. High levels of nitrogen dioxide exposure can give people a cough and can make them feel short of breath. People who are exposed to nitrogen dioxide for a long time have a higher chance of getting respiratory infections. Acid rain can hurt plants and animals, and can make lakes dangerous to swim or fish in. Nitrogen dioxide also reacts with oxygen or hydrocarbons in the presence of sunlight to form irritating photochemical smog.

Carbon monoxide

Carbon monoxide makes it hard for body tissues to get the oxygen they need to work correctly. Exposure to carbon monoxide makes people feel dizzy and tired and gives them headaches.

Ozone

Ozone near the ground can cause a number of health problems. Ozone can lead to more frequent asthma attacks in people who have asthma and can cause sore throats, coughing and difficulty breathing. It may even lead to premature death. Ozone can also hurt plants and crops. When the ozone in the stratosphere is destroyed, people are exposed to more radiation from the sun (ultraviolet radiation). This can lead to skin cancer and eye problems. Higher ultraviolet radiation can also harm plants and animals.

Volatile organic compounds (VOCs)

VOCs cause eye irritation and respiratory irritation, some are carcinogenic, and they decrease visibility by producing a blue-brown haze.

Advantages

Very large amounts of electricity can be generated in one place using coal, fairly cheaply. Transporting oil and gas to the power stations is easy.
Gas-fired power stations are very efficient. A fossil-fuelled power station can be built almost anywhere, so long as you can get large quantities of fuel to it. Didcot power station, in Oxfordshire, has its own rail link to supply the coal.

Disadvantages

Coal is not a renewable resource. Coal-fired power stations create pollution. Mining coal damages the environment. During the production of electricity carbon dioxide is released, increasing the amount of greenhouse gases in the atmosphere. The main drawback of fossil fuel is pollution. Burning any fossil fuel produces carbon dioxide, which contributes to the greenhouse effect warming the Earth. Burning coal produces more carbon dioxide than burning oil or gas. It also produces sulphur dioxide, a gas that contributes to acid rain; this can be reduced before the waste gases are released into the atmosphere. Mining coal can be difficult and dangerous. Strip mining destroys large areas of the landscape. Coal power stations need huge amounts of fuel, which means train-loads of coal almost constantly. In order to cope with changing demands for power, the station needs reserves; this means covering a large area of countryside next to the power station with piles of coal. Sulphur dioxide, nitrogen oxide and nitrogen dioxide are also produced in these emissions and can produce acid rain. (Peirce, J.F., R.F. Weiner and P.A. Vesilind, 1998)

Monitoring pollution

Pollution is measured to ensure that air quality limits are not exceeded.

Monitoring air pollution

When monitoring air pollution it is important to know or decide what pollutants are to be monitored, where they should be monitored, what instruments are to be used for that purpose and what kind of weather data need to be collected; it is also important to work out how many stations are necessary to meet this goal. Carbon monoxide is typically measured using an infrared gas analyzer. With this instrument the absorption of infrared radiation by carbon monoxide in the sample air stream is compared with absorption in a reference gas of known carbon monoxide concentration. This method allows continuous, non-destructive measurement of carbon monoxide in the sampled air. Sulphur dioxide is generally measured by ultraviolet emission spectrometers. This approach is based on the principle that sulphur dioxide emits a measurable flux of radiation when irradiated with intense UV from a light source in the spectrometer. Nitrogen oxides are measured by chemiluminescence. Two sequential chemical reactions involving ozone are used: first NO is measured, then NO2. Infrared radiation is emitted during the oxidation of NO to NO2 by ozone introduced into the instrument, and the amount of radiation produced is proportional to the NO concentration in the air stream. To measure NO2, a catalyst is used to reduce all NO2 in the air stream to NO, whose subsequent reaction with ozone permits the indirect determination of NO2. Ozone concentrations are generally measured using a UV absorption spectrophotometer, although chemiluminescent-type instruments are also used. Various non-methane hydrocarbons are measured using instruments such as a gas chromatograph. Hydrocarbons are generally more difficult to measure than most other pollutants, and often require greater operator involvement in the measurement process. (Pepper, I.L., C.P. Gerba and M.L. Brusseau, 1996) Monitoring of air quality has been undertaken by scientists for several years.
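As a rough illustration of how an absorption-based analyzer, like the infrared carbon monoxide instrument described above, turns a measured optical signal into a concentration, the sketch below applies the Beer-Lambert law (A = epsilon * l * c). It is a simplified picture, not a description of any specific commercial instrument, and the numerical values are made up for illustration only.

```python
import math

# Illustrative Beer-Lambert calculation for an absorption-based gas analyzer.
# A = epsilon * l * c, so c = A / (epsilon * l).
# epsilon (absorptivity) and the path length below are made-up example values.

def absorbance(incident, transmitted):
    """Absorbance from incident and transmitted light intensities."""
    return math.log10(incident / transmitted)

def concentration(A, epsilon, path_length_cm):
    """Gas concentration implied by a measured absorbance."""
    return A / (epsilon * path_length_cm)

A = absorbance(incident=1.00, transmitted=0.92)   # example detector readings
c = concentration(A, epsilon=0.45, path_length_cm=10.0)
print(f"absorbance = {A:.3f}, concentration = {c:.4f} (arbitrary units)")
```

In practice the reference gas of known concentration mentioned above is what fixes the effective calibration (the epsilon * l term), so the instrument reports concentration directly rather than absorbance.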
The air taken into the sampler is drawn first through a white filter paper, on which any smoke present leaves a deposit as a stain of greater or lesser blackness. It then passes through a reagent solution which traps any sulphur dioxide present and converts it to sulphuric acid. After a week's sampling, the seven sets of smoke stains and reagent bottles are brought to the laboratory for analysis. The smoke is determined by measuring instrumentally the loss of reflectance of the once-white filter papers, the reflectance values being converted into equivalent smoke concentrations from a standard calibration graph. The sulphur dioxide is measured by careful titration of the very weak acid solutions, followed by calculation of the results. Air quality is also monitored throughout the district using diffusion tubes and the air quality monitoring station. Contaminated land is a key project for the service, with the production of a contaminated land strategy.

Monitoring gaseous emissions from soil and landfill

Soils play an important role in controlling background concentrations of most air pollutants. Soil can either emit or take up from the atmosphere many trace gases, including NOx, N2O, CO2 and CH4. In general there are three different approaches to measuring gas fluxes between the soil and the atmosphere: chamber approaches, micrometeorological approaches, and soil profile approaches.

Monitoring of tropospheric pollutants

When monitoring tropospheric pollutants, an important step is to know which pollutants are present in the troposphere and how their concentrations vary. Chemists monitor the concentrations of tropospheric pollutants to study patterns and learn about the rate at which certain reactions will take place under certain conditions.

Studying individual reactions in the laboratory

To make predictions about pollution, chemists need to know what reactions take place and how quickly they occur. Many of these reactions involve broken-down fragments of molecules called radicals. Reactions with radicals happen very quickly, but other reactions happen very slowly. Chemists measure these reactions to predict the rate at which a reaction will proceed for any set of conditions.

Modelling studies

The information on rates of reactions is used in computer simulation studies to reproduce and predict the behaviour of pollutants during a smog episode. The more accurate the information used, the more closely the model simulates the observed behaviour.

Smog chamber simulations

These are laboratory experiments on a large scale. Primary pollutants are mixed in a huge clear plastic bag called a smog chamber and exposed to sunlight under carefully controlled conditions. Probes monitor the concentrations of various species as the photochemical smog builds up. The chamber has to be big to minimise any surface effects, where reactions take place on the walls of the container instead of in the gas phase. Chemists monitor pollutants to find out exactly what pollutants are involved in smog formation, and how they vary in concentration. These changes in concentration can show changes in the atmosphere, for example the presence of sunlight. Chemists study reactions to see which pollutants react with which, and most importantly to see which radicals are formed where, because radicals are very reactive and cause a lot of atmospheric reactions. The speed of these reactions needs to be measured to understand how fast substances are being made and destroyed.
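To make the link between measured rate data and the modelling studies mentioned above a little more concrete, the toy sketch below steps a single first-order loss process forward in time. It is a deliberately minimal box-model illustration, not a real atmospheric chemistry model, and the rate constant and starting concentration are assumed values chosen only for the example.

```python
# Toy illustration of how a measured rate constant feeds a simple box model
# of a pollutant's decay (dc/dt = -k * c). Both k and the initial
# concentration are made-up values for illustration.

k = 0.3    # first-order rate constant, per hour (assumed)
c = 100.0  # initial concentration, arbitrary units (assumed)
dt = 0.1   # time step, hours

for step in range(51):
    t = step * dt
    if step % 10 == 0:
        print(f"t = {t:.1f} h, concentration = {c:.1f}")
    c -= k * c * dt  # simple Euler update
```

Real smog models couple many such reactions, including the fast radical reactions described above, which is why accurate laboratory rate measurements matter so much for the quality of the simulation.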
Chemists can make models of situations to predict what will happen in the future. One such model is the smog chamber simulation: huge plastic bags are exposed to sunlight under controlled conditions, and analytical probes monitor the concentrations of different gases as the photochemical smog forms.

Control

Methods for controlling air pollution include removing the hazardous material before it is used, removing the pollutant after it is formed, or altering the process so that the pollutant is not formed or occurs only at very low levels. Industrially emitted particulates may be trapped in cyclones, electrostatic precipitators and filters. Pollutant gases can be collected in liquids or on solids, or incinerated into harmless substances. The best way to control pollution is to control the level of carbon emissions released into the atmosphere.

Using coal with low sulphur content: power plants can use coal with a low sulphur content. As a result, less sulphur dioxide will be produced and the amount of sulphur dioxide in the flue gas will be significantly reduced.

Install scrubbers in power plants: power plants can install scrubbers to reduce the amount of sulphur dioxide in the flue gas. The principles of how scrubbers remove sulphur dioxide are given below.

Dry scrubber: calcium oxide reacts with sulphur dioxide in the flue gas, forming insoluble calcium sulphite, which is then filtered out of the flue gas.
CaO(s) + SO2(g) → CaSO3(s)

Wet scrubber: calcium oxide is first allowed to react with water, forming calcium hydroxide.
CaO(s) + H2O(l) → Ca(OH)2(aq)
Calcium hydroxide then reacts with sulphur dioxide in the flue gas, forming water and calcium sulphite. The calcium sulphite is then filtered out.
Ca(OH)2(aq) + SO2(g) → CaSO3(s) + H2O(l)
(Barret, R. and F. Feates, 1994)

Install an electrostatic precipitator in power plants: power plants can install an electrostatic precipitator to reduce the amount of particulates in the flue gas. The flue gas passes through the electrostatic precipitator; the particulates in the flue gas are attracted by the electric field and then removed from the electrode.

Control the temperature in the combustion chamber: the amount of nitrogen oxides released can be reduced by reducing the flame temperature and the availability of oxygen in the combustion zone. But the flame temperature cannot be too low, as this would cause incomplete combustion and produce carbon monoxide.

The limestone process: the other main way of reducing SOx emissions is to react them with calcium carbonate to produce gypsum for the building trade. This is a hassle, as the gypsum has to compete with other products and be marketed.

NOx emissions: coal-fired power stations used to get the flames as hot as possible to increase the yield; to get the flames hottest, the coal was powdered and mixed with an excess of air. But because the rate of reaction increases as temperature increases, the amount of thermal NOx (produced by nitrogen and oxygen combining) also increases.

Low NOx burners: in this type of burner the injection of air is controlled, so the flames are not as hot. This significantly lowers the production of NOx.
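As a back-of-the-envelope illustration of the 1:1 stoichiometry in the dry scrubber reaction given above, the sketch below estimates how much calcium oxide would be needed to capture a given mass of sulphur dioxide. It assumes complete reaction and pure reagents, which real plants do not achieve (they run with an excess of sorbent), so it is only an order-of-magnitude illustration and not taken from the original article.

```python
# Illustrative sorbent requirement for the dry scrubber reaction
#   CaO(s) + SO2(g) -> CaSO3(s)
# Assumes complete reaction and pure CaO; real plants use excess sorbent.

M_CaO = 56.08   # molar mass of CaO, g/mol
M_SO2 = 64.07   # molar mass of SO2, g/mol

def cao_required(so2_kg):
    """Mass of CaO (kg) needed to capture `so2_kg` of SO2 at a 1:1 molar ratio."""
    return so2_kg * (M_CaO / M_SO2)

print(cao_required(1000.0))  # ~875 kg of CaO per tonne of SO2 captured
```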
Gas reburn: the injection of ethane and methane (natural gas) into the flue gases allows them to react with NOx to produce nitrogen, carbon dioxide and water vapour.
CH4(g) + 4NO(g) → 2N2(g) + CO2(g) + 2H2O(g)
Some of the alkanes will not react, and carbon monoxide is produced as a result of incomplete combustion:
2CH4(g) + 3O2(g) → 2CO(g) + 4H2O(g)
The remaining alkanes and the CO are then reacted with air to combust them completely. This oxidation is exothermic and so produces heat that contributes to the generation of electricity. (Barret, R. and F. Feates, 1994)

Conclusion

Fossil fuels, like coal, oil and natural gas, provide the energy that powers our lifestyles and our economy. The main uses of fossil fuels are to generate electricity, fuel cars, and heat or cool buildings. Fossil fuel is one of humanity's most important sources of energy. Fossil fuel plays a major role in our economy, and many of our current technologies have been developed with fossil fuel in mind. However, burning fossil fuel is damaging the Earth's environment through the release of pollution to the atmosphere. In addition, ecosystems are being damaged by the extraction of fossil fuel. Fossil fuels impact the environment greatly; carbon dioxide emissions contribute to harmful global warming and climate change. Inefficient burning of fossil fuels results in the production of carbon monoxide, which is a very harmful and poisonous gas. Inhalation of this gas can cause death, as it interferes with the transport of oxygen in the bloodstream. Combustion of fossil fuels such as coal, oil and natural gas also produces gases such as nitrogen oxides, which cause acid rain.

Friday, October 25, 2019

Nicaraguan Politics and Government Essay -- Essays on Politics

Nicaraguan Politics and Government On the narrow isthmus known as Central America, between the world's two greatest oceans, Nicaragua has been marked by endless years of political turmoil, social tension and economic dismay. The turmoils that have shaken the country make it plausible to believe that, by some metaphysical law, Nicaraguan politics have accommodated to nature's tantrums. Like its diverse, rugged and seismically active geology, the country's politics have been irregular, impulsive and often explosive (Pastor, 15). The Nicaraguan election of February 25, 1990 represents the country's attempt to break from its turbulent political past and pursue economic and political stability through the establishment of a democracy. The country's elections mark a zenith for world democracy, in that no country's elections had ever been witnessed by more international observers from more diverse groups than were Nicaragua's. The election was closely monitored by myriad international observers, including members of the Organization of American States and the United Nations, as well as members of the Carter Center, including its founder, ex-US President Jimmy Carter. That Sunday morning, beginning at 6 A.M., about one and a half million Nicaraguans - about 86 percent of eligible voters - went to cast their vote in one of over four thousand polling sites throughout the country; the outcome of this election marks a decisive point in the country's history. The results would determine the people's willingness either to continue with the rule of Daniel Ortega and the Sandinista party, which had been in power for over ten years and established a socialist government, or to break away from the misery and persecution of the regime and establish a free, ... ...continuous effort to enact policies that will be beneficial to the Nicaraguan people and country as a whole.

Work Cited

Baumeister, Eduardo. Estructura y Reforma Agraria en Nicaragua. Managua: Editorial Ciencias Sociales, 1998.

Close, David. Nicaragua: The Chamorro Years. London: Lynne Rienner, 1999.

Leiken, Robert S. Why Nicaragua Vanquished. Oxford: Rowman & Littlefield, Inc., 1992.

Morley, Morris H. Washington, Somoza, and the Sandinistas. New York: Cambridge UP, 1994.

Pastor, Robert A. Not Condemned to Repetition. Cambridge: Westview P, 2002.

Plan Nacional de Desarrollo. Gobierno de Nicaragua. 15 May 2005.

Stone, Samuel Z. The Heritage of the Conquistadors. Lincoln: University of Nebraska P, 1990.

Walker, Thomas W. Reagan Versus the Sandinistas: The Undeclared War on Nicaragua. Boulder: Westview P, 1987.

Thursday, October 24, 2019

Chinese and Greek Mythology Essay

Long ago, people wanted to acquire a better understanding of the beginning of the universe, which ultimately resulted in the establishment of religions, beliefs and, most pertinently, creation myths. Mythology provides explanations for the world's mysteries, especially in regards to the creation of the Earth, humans and the environment. This comparative paragraph analyzes the similarities and differences between a Greek myth entitled "The Beginning of Things" and a Chinese myth named "Heaven and Earth and Man", contrasted in the aspects of conflict, solutions, heroic action, and the education of the first humans. Conflicts arise for different reasons, but after the battles cease, peace is restored because of supernatural intervention, and the world advances and progresses to prevent future misfortune. Firstly, if peace is kept in the heavens of Greece then there will be less despair on Earth. The battle for authority results in a punishment system being enforced to confine cruel people and prevent rebellions. In ancient Greece there was a constant power struggle among the gods because of the underlying fear that their children would replace them in the chain of command. The text supports the argument of development and enhancement after unreasonable decisions are made by the deities: "If any of them breaks the oath, for one year he lies breathless, and cannot partake of sweet nectar and ambrosia; after that year he is cut off from the meeting of the gods for nine years more, and then only may he come back and join their company" (Rouse, 3). During the destruction of the battles, evil is unleashed and causes chaos in the land. The justice system, which is created in response to Cronus's rebellion, is essential for any society to continue successfully. There is heroic involvement in both myths, with Zeus in particular in ancient Greece. Zeus defeated his father and saved his brothers and sisters, who had been swallowed and trapped in their father's stomach. Cronus's awful deed deserves punishment, which results in Zeus creating the Underworld and a standard for the amount of time spent being punished. In fact, the Chinese story also includes a quarrel, different in rationale, but improvement after the disagreement is a prevalent theme in both. Subsequently, in respect to the Chinese myth, after the war between fire and water the pillar was destroyed; Nu'Kua repaired the gaps in the sky by supporting it with additional blocks. The literature provides evidence to confirm this line of reasoning: "Block by block, she patched the holes in the sky. Lastly, she killed a giant turtle, and cut off its powerful legs to make pillars between which the sky is firmly held over the Earth, never again to fall" (Birch, 7). After chaos returns for the second time, when the elements fight against each other, involvement from spirits resolves the crisis and mitigates harm to humans. The irrational and aggressive clash between fire and water causes destruction but also provides reasoning for the position of the oceans and world geography. Apart from the similarities, there are many discrepancies circulating around the topic of conflict. In the Greek myth, conflicts originate from the desire to establish power and authority by rebelling. First, Cronus rebelled against his father Uranus, and Zeus against Cronus followed. The competition is caused because children inherit their parents' position, and both gods attempt to prevent this from happening by swallowing or imprisoning them. On the contrary, the Chinese dispute is between the elements fire and water.
In Chinese mythology, fire is masculine and symbolizes strength, aggression and impulsiveness. Water is considered feminine and symbolizes fluidity and downward energy, but has the potential to be noisy. The conflict is probably caused because the elements are opposites and naturally enemies. This clash of the elements is a result of senseless hostility and not a fight for control. The difference in culture is what causes the significant differences between the myths. Evidently, in Greek mythology acquiring status and supremacy is valued, whereas in the Chinese myth there is not a sense of hierarchy but instead teamwork. According to the Asian myth, the spirits all work together towards a common goal, which is to enhance and protect the Earth. Another obvious commonality in relation to either conflict is the presence of a supreme being which triggers and assists the chain of events that forms the world. Greek mythology has many different supreme beings which are responsible for various forces on Earth. The Chinese version includes only two main beings: one is the result of the environment and the other is the creator of the human race. Comparative mythology also requires examining the distinction between the ideas of how both cultures thought the Earth was created. How diverse the cultures and beliefs of peoples are is demonstrated in the topic of the formation of human beings and the surrounding ecosystem. The creation of humans, wildlife and geographic landscapes varies, with the idea of the Greek gods sculpting most organisms themselves, whereas the Chinese believe Pan'Ku's body transforms into the environment. The aspect of creation and the environment is portrayed very differently in the two legends. The number of dissimilarities outweighs the number of corresponding ideas surrounding the mystery of the beginning of the universe and our existence. In ancient Greece, after a period of chaos and disagreement between the deities, a clever titan named Prometheus establishes the first human and provides luminosity and warmth in a world swallowed by darkness after the sun sets. Prometheus sculpts animals and, accidentally, the first human out of clay and begins to teach them how to survive, including hunting and making fire: "Prometheus was very much pleased with his new pet. He used to watch men hunting for food and living in caves and holes, like ants or badgers. He determined to educate men as well as he could" (Rouse, 2). After rebelling by taking responsibility for the Earth underneath the heavens, Prometheus entertains himself by making models out of clay. Accidentally, he creates humans and spends most of his time teaching them how to continue to exist. Prometheus sculpts humans by accident, whereas Nu'Kua from the Chinese myth wants to produce beings that will help cure her solitary state. In contrast, in the Chinese myth the weather conditions, mountains, rivers and vegetation are all created from Pan'Ku's body. Additionally, after humans are created by Nu'Kua, they are taught many vital skills in addition to simply the ability to survive: "Who in his life [Pan'Ku] had brought shape to the universe, by his death gave his body to make it rich and beautiful… to the Earth he gave his body" (Birch, 6). In the Chinese story, the environment is not created by a specific spirit; instead, a god transforms into the surrounding nature and landscape.
A further contrast with the Greek tale is that there is little explanation of how the land and plants are created, except for the separation of sky and ground, which reveals an already existing ecosystem. Moreover, the humans in the Chinese myth are taught how to communicate, reproduce and live in peace. The humans in ancient Greece are never taught skills beyond survival. Finally, there is an evident variation in the reasons for assembling humans. Nu'Kua intends to create a creature that will provide her relief from isolation, while Prometheus is only amusing himself and the first human emerges entirely unintentionally. Nevertheless, both fairy-tales have a couple of resembling principles. To begin with, humans are formed and educated by the deities. The first humans were taught to hunt, gather food, and construct shelter to avoid perishing as a species. The principal objective is to help humans continue to populate, and the justification in both fables is that supernatural intervention maintained the evolution of such a powerful species. Magical clay was used in both myths as the main material in the production of creatures and human beings. The reason these two parables are so similar is to emphasize that there is an external influence which assisted the formation of humans, because it is difficult to believe that simple resources could have conceived such complex living, breathing creatures. Additionally, as a society in the present day, education is a requirement and essential for the genetic continuity of the human race, peace and the maintenance of the Earth's resources. Only by means of education can one's potential be used to its maximum extent. It is natural for the authors of these short fictitious stories to assume that the heroes and goddesses teach humans, because otherwise there would be no foundation to carry on the sharing of lessons and information. In conclusion, it is in the nature of humans to wonder about the unknown and search for answers. At the foundation of nearly every culture is a creation myth which explains how the wonderful mysteries of the Earth came to be. Despite geographical barriers, many cultures have developed creation myths with the same basic elements and structure. However, there are many cultural and societal influences which cause variations in the beliefs and alter the overall creation myth from region to region. Apart from the fundamental similarities, the Greek and Chinese ideologies deviated in certain aspects of the myth because their values and morals as separate countries have impacted, adapted and evolved differently in response to world events.

Wednesday, October 23, 2019

Phosphine gas general info

Health

- Extremely flammable
- Very toxic by inhalation: symptoms usually occur within a few hours of exposure
- Phosphine is irritating to the mucous membranes of the nose, mouth, throat and respiratory tract
- Inhalation may result in weakness, chest tightness and pain, dry mouth, cough, sickness, vomiting, diarrhoea, chills, muscle pain, headache, dizziness, ataxia, confusion and lung damage. These symptoms may develop 2-3 days after exposure
- Severe poisoning may result in increased heart rate, low blood pressure, convulsions, coma, heart damage and death. These symptoms usually occur within 4 days but may be delayed up to 1-2 weeks
- Exposure to the eyes or skin may cause irritation
- Long-term exposure may cause anaemia, bronchitis, gastrointestinal disorders, speech and motor problems, toothache, weakness, weight loss, swelling and damage of the jaw bone and spontaneous fractures
- Phosphine has not been associated with cancer
- Phosphine is not likely to cause reproductive or developmental effects

Environment

- Dangerous for the environment
- Inform the Environment Agency of substantial release incidents

Prepared by L. Assem & M. Takamiya, Institute of Environment and Health, Cranfield University, 2007, Version 1

Background

Phosphine is a colourless gas which is slightly heavier than air. It usually smells of garlic or rotting fish due to the presence of contaminants, but pure phosphine is odourless. It is extremely flammable and highly reactive with air, copper and copper-containing alloys. Phosphine is rarely found in nature. Small amounts can be formed during the breakdown of organic matter, although it is rapidly degraded. Phosphine is released into the air via emissions from various manufacturing processes and from the use of metal (magnesium, aluminium and zinc) phosphide fumigants and pesticides, which release phosphine on contact with water or acid. The major uses of phosphine are as a fumigant during the storage of agricultural products such as nuts, seeds, grains, coffee and tobacco, and in the manufacture of semi-conductors. Phosphine is also used in the production of some chemicals and metal alloys, and is an unintentional by-product in the illegal manufacture of the drug methamphetamine. Inhalation is the most likely route of exposure to phosphine, although ingestion of metal phosphides may also occur. Symptoms are non-specific and include irritation of the respiratory tract, headaches, dizziness, abdominal pain, sickness and vomiting; severe cases can involve convulsions, damage to the lungs, heart, liver and kidney, and death. Long-lasting effects of single-dose exposure are unlikely, with most symptoms clearing within a month. Long-term exposure to phosphine, while unlikely to occur, can cause bronchitis, gastrointestinal, visual, speech and motor problems, toothache, swelling of the jaw, anaemia and spontaneous fractures. Children exposed to phosphine will have the same symptoms of poisoning as adults. Phosphine is not likely to cause harm to the unborn child, as acute effects are not known to cause developmental effects. Phosphine is rapidly broken down in the environment and it is very unlikely that the general population will be exposed to sufficient levels of phosphine to cause health effects. However, people may be exposed to very small amounts of phosphine present in air, food and water.
Phosphine has not been associated with cancer and has not been reviewed by the International Agency for Research on Cancer. Workers employed as fumigators, pest-control operators, transport workers and those involved in the production or use of phosphine and metal phosphides (welding, metallurgy, semi-conductors) may be exposed to higher levels of phosphine, although occupational incidents involving exposure to phosphine are rare, and safety levels are in place to protect employees.

Production and Uses

Phosphine is present in emissions from some industrial processes, such as the manufacture of some chemicals and metal alloys (including metal phosphides), and from its use as a catalyst and in the production of polymers. The main uses of phosphine are as a chemical dopant in the manufacture of semiconductors for the electronics industry, and in the fumigation (in the form of metal phosphides) of stored agricultural products such as cereal grains and tobacco. Phosphine is also used as a condensation catalyst and in the manufacture of some polymers. Zinc phosphide is used as a rodenticide in the form of a pellet or as a paste mixed with food. Small amounts of phosphine are produced in the production of chemicals such as phosphonium halides and acetylene gas.

Frequently Asked Questions

What is phosphine? Phosphine is a colourless gas which is highly flammable and explosive in air. Pure phosphine is odourless, although most commercially available grades have the odour of garlic or decaying fish. Small amounts of phosphine can occur naturally, formed during the anaerobic degradation of organic matter. Phosphine is corrosive towards metals, in particular copper and copper-containing alloys.

What is phosphine used for? A major use of phosphine is as a semi-conductor doping agent in the electronics industry. Metal (aluminium, magnesium and zinc) phosphides, which release phosphine on contact with moisture and acid, are used as rodenticides and fumigants during storage of agricultural commodities such as grain, e.g. cereals, and tobacco. Phosphine is also used as a catalyst and in the production of polymers.

How does phosphine get into the environment? Small amounts of phosphine occur naturally during the decomposition of phosphorus-containing organic matter, e.g. in marsh gas. Emissions and effluents from the manufacture of some chemicals and metal alloys, as well as the production or use of phosphine and metal phosphides (welding, metallurgy, semi-conductors, rodenticides and fumigants), release phosphine into the air.

How will I be exposed to phosphine? It is unlikely that the general population will be exposed to significant amounts of phosphine, since it is degraded quickly in the environment; the half-life of phosphine in the air is about one day or less. However, people may be exposed to very small amounts by inhaling air, drinking water and eating food containing phosphine. Workers involved with industries and processes where phosphine is used, e.g. fumigation and pest control, may be exposed to higher levels of phosphine. People living near sites where phosphine is being used may also be exposed to small amounts of phosphine in the air. Phosphine gas does not present a risk of secondary contamination, although solid phosphides may pose some risk. Absorption through the skin is not considered a significant route of exposure. If there is phosphine in the environment, it does not always lead to exposure: in order for phosphine to cause any adverse health effects you must come into contact with it. You may be exposed by breathing, eating, or drinking the substance or by skin contact.
Following exposure to any chemical, the adverse health effects you may encounter depend on several factors, including the amount to which you are exposed (dose), the way you are exposed, the duration of exposure, the form of the chemical and whether you are exposed to any other chemicals. Exposure to phosphine or metal phosphides can be irritating to the respiratory tract and can cause weakness, chest pain and tightness, dry mouth, cough, sickness, vomiting, diarrhoea, chills, muscle pain, headache, dizziness, ataxia and confusion. Severe cases may lead to lung damage, convulsions, damage to the heart, liver and kidney, and death. Long-term exposure to low levels of phosphine can cause anaemia, bronchitis, gastrointestinal problems, visual, speech and motor problems, toothache, swelling of the jaw and spontaneous fractures.

Can phosphine cause cancer? The Government's Committee on Mutagenicity recently reviewed the available data on the carcinogenicity of phosphine and concluded that it did not cause cancer in animal studies. Phosphine has not been reviewed by the International Agency for Research on Cancer (IARC), and the US Environmental Protection Agency (US EPA) considers phosphine as not classifiable as to human carcinogenicity, due to inadequate animal studies and a lack of human tumour data.

Does phosphine affect children or damage the unborn child? Children who ingest metal phosphides or inhale phosphine gas are expected to have similar symptoms to adults, e.g. sickness, vomiting, headache and dizziness, in severe cases leading to damage to the lungs, heart, liver and kidney, and death. There is no evidence to suggest that maternal exposure to phosphine affects the health of the unborn child.

What should I do if I am exposed to phosphine? It is very unlikely that the general population will be exposed to a level of phosphine high enough to cause adverse health effects. This document from the HPA Centre for Radiation, Chemical and Environmental Hazards reflects understanding and evaluation of the current scientific evidence as presented and referenced in this document.

Tuesday, October 22, 2019

The Pendulum essays

The Pendulum essays Galileo Galilei extensively performed experiments on pendulums throughout his life, and researched the parameters and characteristics of their motion. Through further investigation of the parameters of the pendulum, he was able to use pendulums as time measurement devices later in his career. These parameters included how the period of the pendulum is independent of its bob weight, how the period of the pendulum is independent of the amplitude or angle, and how the length of the pendulum varies with the period. Furthermore, through my experiments, I attempted to investigate these parameters and delve into how the equation for the period of a pendulum functions. Moreover, these experiments also examine Galileo's trials on the conservation of energy and gravity in a pendulum. In Galileo's "Dialogues Concerning Two New Sciences", he conducts experiments on how the period is independent of the bob weight. Moreover, in my experiment on how mass affects the period of the pendulum, different weights were placed on a string of the same length and amplitude. They were both suspended and dropped from an angle of 90 degrees and found to have approximately the same period. Galileo also performed this experiment in his "Dialogues Concerning Two New Sciences", using cork and lead pendulums of the same length, hanging them from his ceiling and measuring the periods. For five of his trials, the cork was allowed to travel through ten oscillations and was compared to the number of oscillations of the lead pendulum during that time, and then this process was reversed for the two weights. Galileo further confirms his conclusion on how mass affects the period: "I took two balls, one of lead and one of cork, the former more than a hundred times heavier than the latter, and suspended them by means of two equal fine threads, each four to five cubits long. This free vibration repeated many times showed clearly that the heavi...
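To connect these observations to the modern result (which was not available to Galileo), the short sketch below uses the small-angle formula for a simple pendulum, T = 2*pi*sqrt(L/g). The lengths chosen are arbitrary examples, and the small-angle formula is only approximate for the large 90-degree swings described in the essay above.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def period(length_m):
    """Small-angle period of a simple pendulum; it depends only on length."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Neither the bob's mass nor (for small swings) the amplitude appears in the
# formula, and the period grows with the square root of the length.
for L in (0.5, 1.0, 2.0):
    print(f"L = {L:.1f} m  ->  T = {period(L):.2f} s")
```

This matches Galileo's parameters: the bob's weight does not enter at all, while period and length are linked, since quadrupling the length doubles the period.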

Sunday, October 20, 2019

Resume Tips Part 1 Words and Phrases to Delete from Your Resume

Resume Tips Part 1 Words and Phrases to Delete from Your Resume When I review resumes I find many commonly used words and phrases that are either outright erroneous or simply useless on a resume. I hope this short series of resume tips will decrease the appearance of these words on resumes throughout the job-hunting market. Words to delete from your resume: various, variety, etc. 1. Various (or "a variety of"). Compare: a. Performed legal research and wrote memoranda and briefs on various civil procedural and substantive issues. b. Performed legal research and wrote memoranda and briefs on civil procedural and substantive issues, including unconscionability, issue preclusion and equitable estoppel. Version a. leaves us with nothing to grab on to. The candidate in version b. sounds a lot more interesting, doesn't she? The trick is to list the actual things that constitute the variety. Variety on its own doesn't tell us much. I acknowledge that there might be exceptions to this rule. Sometimes it does work to use the word "various" or "variety." My recommendation is to take it out and see if the bullet works better. It probably will. Please report back what you discover. 2. Etc. "Etc." is just a variation on various. Example: a. Managed, developed, and supervised programmatic activities that reduce recidivism through individual counseling, mentoring, family supportive counseling, girls empowerment groups, life skills classes, leadership workshops, etc. This list is long enough. What could possibly be added by adding "etc." to the end of it? It just leaves the reader hanging. Make your list, put an "and" before the last item, and add to it in your interview if necessary. You are welcome to precede your list with "including" or "such as." OK, now go look at your resume. Did you find various or etc.? Did you delete them? What was the impact? Share the results in the comments below. Hope you found these resume tips useful. For more resume writing assistance, check out The Essay Expert's Resume Writing Services.

Saturday, October 19, 2019

A Synopsis of the Movie The Hurt Locker Essay Example for Free

A Synopsis of the Movie The Hurt Locker Essay How do at least two of the following production elements combine to engage audiences with what is occurring in one scene or sequence of the narrative you have studied? Scene 6 escalates the growing tension between James and Sanborn through sound and camera work. The audience assumes that the situation is extremely dangerous at the beginning of the scene due to the air raid siren sounds and the long shot establishing the evacuation of the UN building. This chaos is further emphasised by the shaky camera and the stressed voice tones in the dialogue of Sanborn and Eldridge. The audience's tension is then released by James' sigh as he puts out the burning car fire. The chaotic tension between the team members and their interrelationships is highlighted by the quick, hasty shots between the three as they attempt to access the bomb. The audience is drawn into this scene even more by being positioned as the fourth team member, often through shots through the scope of a gun and through the bomb suit mask. By involving the audience, Bigelow is able to further engage the audience with the relationships between the characters. Bigelow keeps the audience on edge during this scene through the highlighted breaths and sighs of Jeremy Renner. The sighs are often used to release tension between the characters and the audience, in particular when Renner defuses the bomb and he receives his adrenaline rush. Renner also exhales heavily when searching the car for the bomb, emphasising the frustration and inability to find the bomb. This allows the movie-goers to feel James' adrenaline rush kicking in as well. When Renner cuts the seat of the car, the shot creates a sound edit. The audience is still experiencing the high emotions from the intercutting shots between Sanborn and James when the frame cuts to black, and the sound of the cutting shocks the audience into believing that someone has been shot. Fittingly, the ripping of the material by Renner further establishes his character as stupid or blindly courageous, making the audience view him less favourably as he is endangering his own life and the lives of Sanborn and Eldridge. A Synopsis of the Movie The Hurt Locker. (2016, Dec 09).

Friday, October 18, 2019

The Story of Continental Airlines remarkable turnaround in 1994 is Essay

The Story of Continental Airlines remarkable turnaround in 1994 is well known in business policy and strategy classes worldwide - Essay Example Continental Airlines is presently America’s fifth largest airline, carrying around 50 million passengers a year across the globe to more than 227 destinations. But a couple of decades back in 1993, it was facing its third and final bankruptcy when the new CEO Gordon Bethune and consultant Brenneman created history in the turnaround of Continental Airlines. Their leadership initiatives turned the loss of $613 million in 1994 into a $224 million profit in 1995 (Brenneman, 1998). The discontented and highly demotivated workforce became the major enabling element of success that contributed to its renewal. The turnaround strategy was a critical factor, conceptualized around four simple strategic principles that required strong belief, persistence and constant motivation. The creative approach highlighted the leaders’ vision, which was used to inspire the workforce for a higher productive outcome. The turnaround strategy of Bethune and Brenneman was mainly focused on how the firm’s falling fortunes could be turned into success. They did not place undue emphasis on cost cutting but rather made judicious plans for building a strong team of high-performance members who believed in collective actions and shared goals. The leadership of the new management was exemplary in its forward-looking outlook and expedited the process of recovery with a single focus on defined goals and objectives. The strategies that were implemented are as follows: 1. Strategic action plans for recovery Bateman and Snell (2009:132) assert that strategic plans enable organizations to be innovative and develop linkages to meet the needs of the markets. Flexibility of approach and well laid out plans provide firms with clear direction for the future (Montgomery, 2008). In Continental’s case, the action plan was made with feedback from the customers and employees, which was communicated across the workforce. The recovery plan was distinct in its simple targets but strict in its timeframe so that recovery could be fast. The necessity of fast action was communicated to the workforce so that they could understand and become proactive in making it a success. It worked in Continental’s case because the high frustration in the workforce was mainly due to unclear and frequently changing strategies of the past. The lack of concise directions and targets had led to disillusionment, adversely impacting their motivation for higher achievement. The new plans were clearly defined by the management and communicated on a regular basis to the workforce, which helped to strengthen their confidence and motivated them to work towards the goals with renewed enthusiasm. 2. Leadership initiative and team building Drucker (1999) believes that the external and internal environments hugely contribute to business performance and managerial leadership innovatively exploits them for the organization’s advantage. The open communication approach used by Bethune and Brenneman helped in adapting to the strategic changes that were introduced to transform business dynamics. Leadership initiative is a critical factor to develop an organizational culture of proactive participation, shared learning and strong teamwork (Shapiro, Slywotzky and Tedlow, 2000). The leaders looked for opportunities and exploited them with a sense of high urgency through a team of motivated

Case 7 Study Example | Topics and Well Written Essays - 500 words

7 - Case Study Example CVS Corporation’s mission is to improve the livelihoods of its customers through innovation and provision of exceptional health and pharmacy services that enhance safety, affordability, and ease of access. The company’s financial and audit performance will determine the development of its strategies. Other sources of funding, however, are needed to implement crucial strategies. The company’s strategic plans, however, are uncertain. The emergence of new strategies that prove vital may call for renewed funding. The company aims at strengthening its position in the market. Despite the economic constraints experienced in the past two years, the company is still developing strategies to capture new markets. The company also intends to diversify its services in the market. Because of the high rate of technological growth in the world, the company intends to digitize its service delivery. Subsequent innovations aimed at improving the company’s performance will be coupled with this technological establishment. The implementation would run parallel to online sales for the company. The company identified a seasonal trend in its sales in previous years. To counter that, the company aims at acquiring different companies that relate directly or indirectly to CVS. The company is developing strategies to enable it to develop its own product brand. One of the objectives of CVS is to achieve global expansion. This strategy implementation will enable CVS to venture into foreign markets outside the U.S. This will be vital in the stabilization of the capital flow for the company. A license will be vital in this expansion, for it will enable the company to take the full risk of the international market. Compared to the previous case study, this current study takes the form of a business plan. The vision and objectives of the company provide an insight as to where the company is heading. The efforts required to achieve the company’s mission reflect the

How far did the Anti-Saloon League contribute to prohibition becoming Essay

How far did the Anti-Saloon League contribute to prohibition becoming active in the USA in the 1920's - Essay Example Drinking in those days started to become popular and soon men started spending more time in saloons and pubs than in their own homes. The interest of the family was often affected by the habits of men who took to drinking, and the effects of alcoholism soon started reverberating in the modern society. With pressures of life mounting in the towns and cities, it became fashionable for men to display their machismo by immersing themselves in booze and smoke. Other than machismo, visiting the saloon started to be considered a social requirement. It was considered a place where a man could enhance his awareness and also eat and booze cheaply. "The saloonkeeper is the only man who keeps open house in the ward. It is his business to entertain. It does not matter that he does not select his guests; that convention is useless among them. In fact, his democracy is one element of his strength. His face is the common meeting ground of his neighbours - and he supplies the stimulus which renders social life possible; there is an accretion of intelligence that comes to him in his business. He hears the best stories. He is the first to get accurate information as to the latest political deals and social mysteries. The common talk of the day passes through his ears and he is known to retain that which is the most interesting." (Moore 1897). It was in the later part of the 1800s that the sentiment against alcoholism slowly started to catch up in American society. People awakened to the effect of the drink and, taking a cue from families that were often deprived of basic necessities because of their breadwinner’s drinking habits, began to assimilate ideas against alcoholism. Even though many anti-alcoholic forums were active in those days, the Anti-Saloon League became a force to reckon with and soon played a major role in changing public opinion about alcoholism in the country. The league went on to become so powerful that it

Thursday, October 17, 2019

Organisations competition business environment Essay

Organisations competition business environment - Essay Example This is not a war, but the language of business is filled with win-lose terms. An organisation wins a game, beats the other in sales. This is a daily practice and we go through every day with these types of competitive activities. A unique characteristic of global competition is that it is a closed text. This competition adopts a signification of the underlying model that justifies contemporary strategies of businesses. However, critics of competition have always argued that competition should be avoided because of its negative effects on the performance of organisations. They are of the view that competition can result in nervousness with high anxiety levels, lesser productivity, de-motivation of those who believe they have no chance of winning, extrinsic motivation, contingent self-esteem that goes up and down depending on how one's performance compares with that of others, bad relationships, aggression toward others in an attempt to win at all costs, and fraud. "The outcomes of competition are seen as so destructive by some individuals that they have proposed eliminating it altogether, especially from the workplace" (Maehr & Midgley 399-427). But the success stories of different organisations tell us that competitive experiences have always been perceived to be healthy for businesses. The macroeconomic theory of global trade recognizes competition as a driving force. Boehm develops a framework for five forces driving competition among human service organisations: (1) rivalry among existing organisations; (2) the presence of substitute services in the market; (3) the bargaining power of suppliers; (4) the bargaining power of consumers; (5) the threat of entrance by new organisations (Boehm 61-78). In international trade, nations cannot have a competitive advantage in all goods and services, but they have to compete with others even in the fields of their excellence. According to the story of the global version of competition, the signals that organisations receive have a restricted interpretation. Firms are caught in an algorithm that demands the top interests of stockholders. As a result, firms adopt a strategy of raising productivity and reducing cost. A nexus is depicted between signals, incentives and rational behaviour. Signals acquire the form of relative prices. Profits provide the inducement to act on signals. To behave rationally is to respond to them with action. Since 1980 the pace of global competition has been very fast and firms around the globe have been experiencing different types of competition. By summarising the story of the global version of competition, it can be said that at present the speed of change is extraordinary. In the whole situation, information technology, globalisation of world finances and markets' deregulation have played a great role and provided a new shape to the competition between organisations. The situation has provided a great benefit to developing countries. Several firms of developed nations are experiencing a supply shock from their counterparts in the developing world. As the transfer of capital and technology from the developed world is no longer a problem, developing countries are competing with the firms of developed nations. Low wages are also a strong tool in the hands of developing countries to give tough competition to the firms of industrially advanced countries. And this is also a

Chemistry of Hazardous Materials Case Study Example | Topics and Well Written Essays - 500 words

Chemistry of Hazardous Materials - Case Study Example The characteristics of the hazardous substance in the four containers are determined through the NFPA system. The system gives the procedure for identifying the relative levels of the three hazards: chemical reactivity, health, and flammability (Meyer, 2010). The HazMat team experiences several hazardous situations. The punctured tank may contain flammable fumes or chemicals; it is hazardous because it may result in combustion. The corrosive materials can also negatively affect members of the HazMat team; the fumes can cause skin irritation and respiratory tract infection and inflammation. Strong oxidizing substances have the ability to corrode and thus, if not handled properly, can burn the skin tissues. Strong acids and bases also show corrosive characteristics and may burn the skin. All three substances engage in chemical reactions that can produce dangerous substances, like hazardous fumes that cause respiratory tract irritation (Meyer, 2010). A lot of restraint should be exercised at the accident scene. Only members of the HazMat team should be allowed to access the site with the punctured tank and the three other tanks containing hazardous materials (Meyer, 2010). The members of the team must wear full protective gear. The gas masks are aimed at preventing the inhalation of dangerous fumes or hazardous chemicals. The reflector jackets prevent skin contact with the hazardous materials. The other individuals who wish to access the accident scene must be told to wear adequate personal protective equipment. The protective equipment includes eyeglasses, gas masks, reflector or dust coats, gloves, and safety boots. The eyeglasses minimize eye irritation by minimizing contact between the eyes and hazardous fumes. The reflector clothing reduces corrosion by hindering contact between the skin and the hazardous substances.


Tuesday, October 15, 2019

Chemistry Life in Daily Life Essay Example for Free

Chemistry Life in Daily Life Essay Introduction: Fluorine has the distinction of being the most reactive of all the elements, with the highest electronegativity value on the periodic table. Because of this, it proved extremely difficult to isolate. Davy first identified it as an element, but was poisoned while trying unsuccessfully to decompose hydrogen fluoride. Two other chemists were also later poisoned in similar attempts, and one of them died as a result. French chemist Edmond Fremy (1814-1894) very nearly succeeded in isolating fluorine, and though he failed to do so, he inspired his student Henri Moissan (1852-1907) to continue the project. One of the problems involved in isolating this highly reactive element was the fact that it tends to attack any container in which it is placed: most metals, for instance, will burst into flames in the presence of fluorine. Like the others before him, Moissan set about to isolate fluorine from hydrogen fluoride by means of electrolysis—the use of an electric current to cause a chemical reaction—but in doing so, he used a platinum-iridium alloy that resisted attacks by fluorine. In 1906, he received the Nobel Prize for his work, and his technique is still used today in modified form. Properties And Uses Of Fluorine: A pale green gas of low density, fluorine can combine with all elements except some of the noble gases. Even water will burn in the presence of this highly reactive substance. Fluorine is also highly toxic, and can cause severe burns on contact, yet it also exists in harmless compounds, primarily in the mineral known as fluorspar, or calcium fluoride. The latter gives off a fluorescent light (fluorescence is the term for a type of light not accompanied by heat), and fluorine was named for the mineral that is one of its principal hosts. Beginning in the 1600s, hydrofluoric acid was used for etching glass, and is still used for that purpose today in the manufacture of products such as light bulbs. The oil industry uses it as a catalyst—a substance that speeds along a chemical reaction—to increase the octane number in gasoline. Fluorine is also used in a polymer commonly known as Teflon, which provides a non-stick surface for frying pans and other cooking-related products. Just as chlorine saw service in World War I, fluorine was enlisted in World War II to create a weapon far more terrifying than poison gas: the atomic bomb. Scientists working on the Manhattan Project, the United States' effort to develop the bombs dropped on Japan in 1945, needed large quantities of the uranium-235 isotope. This they obtained in large part by diffusion of the compound uranium hexafluoride, which consists of molecules containing one uranium atom and six fluorine anions. Fluoridation Of Water: Long before World War II, health officials in the United States noticed that communities having a high concentration of fluoride in their drinking water tended to suffer a much lower incidence of tooth decay. In some areas the concentration of fluoride in the water supply was high enough that it stained people's teeth; still, at the turn of the century—an era when dental hygiene as we know it today was still in its infancy—the prevention of tooth decay was an attractive prospect. Perhaps, officials surmised, it would be possible to introduce smaller concentrations of fluoride into community drinking water, with a resulting improvement in overall dental health.
After World War II, a number of municipalities around the United States undertook the fluoridation of their water supplies, using concentrations as low as 1 ppm. Within a few years, fluoridation became a hotly debated topic, with proponents pointing to the potential health benefits and opponents arguing from the standpoint of issues not directly involved in science. It was an invasion of personal liberty, they said, for governments to force citizens to drink water which had been supplemented with a foreign substance. During the 1950s, in fact, fluoridation became associated in some circles with Communism—just another manifestation of a government trying to control its citizens. In later years, ironically, antifluoridation efforts became associated with groups on the political left rather than the right. By then, the argument no longer revolved around the issue of government power; instead the concern was for the health risks involved in introducing a substance lethal in large doses. Fluoride had meanwhile gained application in toothpastes. Colgate took the lead, introducing stannous fluoride in 1955. Three years later, the company launched a memorable advertising campaign with commercials in which a little girl showed her mother a report card from the dentist and announced, "Look, Ma! No cavities!" Within a few years, virtually all brands of toothpaste used fluoride; however, the use of fluoride in drinking water remained controversial. As late as 1993, in fact, the issue of fluoridation remained heated enough to spawn a study by the U.S. National Research Council. The council found some improvement in dental health, but not as large as had been claimed by early proponents of fluoridation. Furthermore, this improvement could be explained by reference to a number of other factors, including fluoride in toothpastes and a generally heightened awareness of dental health among the U.S. populace. Chlorofluorocarbons: Another controversial application of fluorine is its use, along with chlorine and carbon, in chlorofluorocarbons. As noted above, CFCs have been used in refrigerants and propellants; another application is as a blowing agent for polyurethane foam. This continued for several decades, but in the 1980s, environmentalists became concerned over depletion of the ozone layer high in Earth's atmosphere. Unlike ordinary oxygen (O2), ozone or O3 is capable of absorbing ultraviolet radiation from the Sun, which would otherwise be harmful to human life. It is believed that CFCs catalyze the conversion of ozone to oxygen, and that this may explain the ozone hole, which is particularly noticeable over the Antarctic in September and October. As a result, a number of countries signed an agreement in 1996 to eliminate the manufacture of halocarbons, or substances containing halogens and carbon. Manufacturers in countries that signed this agreement, known as the Montreal Protocol, have developed CFC substitutes, most notably hydrochlorofluorocarbons (HCFCs), CFC-like compounds also containing hydrogen atoms. The ozone-layer question is far from settled, however. Critics argue that in fact the depletion of the ozone layer over Antarctica is a natural occurrence, which may explain why it only occurs at certain times of year. This may also explain why it happens primarily in Antarctica, far from any place where humans have been using CFCs. (Ozone depletion is far less significant in the Arctic, which is much closer to the population centers of the industrialized world.)
In any case, natural sources, such as volcano eruptions, continue to add halogen compounds to the atmosphere. Introduction: Chlorine is a highly poisonous gas, greenish-yellow in color, with a sharp smell that induces choking in humans. Yet, it can combine with other elements to form compounds safe for human consumption. Most notable among these compounds is salt, which has been used as a food preservative since at least 3000 B.C. Salt, of course, occurs in nature. By contrast, the first chlorine compound made by humans was probably hydrochloric acid, created by dissolving hydrogen chloride gas in water. The first scientist to work with hydrochloric acid was Persian physician and alchemist Rhazes (ar-Razi; c. 864-c. 935), one of the most outstanding scientific minds of the medieval period. Alchemists, who in some ways were the precursors of true chemists, believed that base metals such as iron could be turned into gold. Of course this is not possible, but alchemists in about 1200 did at least succeed in dissolving gold using a mixture of hydrochloric and nitric acids known as aqua regia. The first modern scientist to work with chlorine was Swedish chemist Carl W. Scheele (1742-1786), who also discovered a number of other elements and compounds, including barium, manganese, oxygen, ammonia, and glycerin. However, Scheele, who isolated it in 1774, thought that chlorine was a compound; only in 1811 did English chemist Sir Humphry Davy (1778-1829) identify it as an element. Another chemist had suggested the name halogen for the alleged compound, but Davy suggested that it be called chlorine instead, after the Greek word chloros, which indicates a sickly yellow color. Uses Of Chlorine: The dangers involved with chlorine have made it an effective substance to use against stains, plants, animals—and even human beings. Chlorine gas is highly irritating to the mucous membranes of the nose, mouth, and lungs, and it can be detected in air at a concentration of only 3 parts per million (ppm). The concentrations of chlorine used against troops on both sides in World War I (beginning in 1915) were, of course, much higher. Thanks to the use of chlorine gas and other antipersonnel agents, one of the most chilling images to emerge from that conflict was of soldiers succumbing to poisonous gas. Yet just as it is harmful to humans, chlorine can be harmful to microbes, thus preserving human life. As early as 1801, it had been used in solutions as a disinfectant; in 1831, its use in hospitals made it effective as a weapon against a cholera epidemic that swept across Europe. Another well-known use of chlorine is as a bleaching agent. Until 1785, when chlorine was first put to use as a bleach, the only way to get stains and unwanted colors out of textiles or paper was to expose them to sunlight, not always an effective method. By contrast, chlorine, still used as a bleach today, can be highly effective—a good reason not to use regular old-fashioned bleach on anything other than white clothing. (Since the 1980s, makers of bleaches have developed all-color versions to brighten and take out stains from clothing of other colors.) Calcium hypochlorite (CaOCl), both a bleaching powder and a disinfectant used in swimming pools, combines both the disinfectant and bleaching properties of chlorine. This and the others discussed here are just some of many, many compounds formed with the highly reactive element chlorine. Particularly notable—and controversial—are compounds involving chlorine and carbon.
Chlorine And Organic Compounds: Chlorine bonds well with organic substances, or those containing carbon. In a number of instances, chlorine becomes part of an organic polymer such as PVC (polyvinyl chloride), used for making synthetic pipe. Chlorine polymers are also applied in making synthetic rubber, or neoprene. Due to its resistance to heat, oxidation, and oils, neoprene is used in a number of automobile parts. The bonding of chlorine with substances containing carbon has become increasingly controversial because of concerns over health and the environment, and in some cases chlorine-carbon compounds have been outlawed. Such was the fate of DDT, a pesticide soluble in fats and oils rather than in water. When it was discovered that DDT was carcinogenic, or cancer-causing, in humans and animals, its use in the United States was outlawed. Other, less well-known, chlorine-related insecticides have likewise been banned due to their potential for harm to human life and the environment. Among these are chlorine-containing materials once used for dry cleaning. Also notable is the role of chlorine in chlorofluorocarbons (CFCs), which have been used in refrigerants such as Freon, and in propellants for aerosol sprays. CFCs tend to evaporate easily, and concerns over their effect on Earth's atmosphere have led to the phasing out of their use. Introduction: Bromine is a foul-smelling reddish-brown liquid whose name is derived from a Greek word meaning "stink." With a boiling point much lower than that of water—137.84°F (58.8°C)—it readily transforms into a gas. Like other halogens, its vapors are highly irritating to the eyes and throat. It is found primarily in deposits of brine, a solution of salt and water. Among the most significant brine deposits are those in Israel's Dead Sea, as well as in Arkansas and Michigan. Credit for the isolation of bromine is usually given to French chemist Antoine-Jerome Balard (1802-1876), though in fact German chemist Carl Lowig (1803-1890) actually isolated it first, in 1825. However, Balard, who published his results a year later, provided a much more detailed explanation of bromine's properties. The first use of bromine actually predated both men by several millennia. To make their famous purple dyes, the Phoenicians used murex mollusks, which contained bromine. (Like the names of the halogens, the word Phoenicians is derived from Greek—in this case, a word meaning red or purple, which referred to their dyes.) Today bromine is also used in dyes, and other modern uses include applications in pesticides, disinfectants, medicines, and flame retardants. At one time, a compound containing bromine was widely used by the petroleum industry as an additive for gasoline containing lead. Ethylene dibromide reacts with the lead released by gasoline to form lead bromide (PbBr2), referred to as a scavenger, because it tends to clean the emissions of lead-containing gasoline. However, leaded gasoline was phased out during the late 1970s and early 1980s; as a result, demand for ethylene dibromide dropped considerably. Halogen Lamps: The name halogen is probably familiar to most people because of the term halogen lamp. Used for automobile headlights, spotlights, and floodlights, the halogen lamp is much more effective than ordinary incandescent light.
Incandescent heat-producing light was first developed in the 1870s and improved during the early part of the twentieth century with the replacement of carbon by tungsten as the principal material in the filament, the area that is heated. Tungsten proved much more durable than carbon when heated, but it has a number of problems when combined with the gases in an incandescent bulb. As the light bulb continues to burn for a period of time, the tungsten filament begins to thin and will eventually break. At the same time, tungsten begins to accumulate on the surface of the bulb, dimming its light. However, by adding bromine and other halogens to the bulb's gas filling—thus making a halogen lamp—these problems are alleviated. As tungsten evaporates from the filament, it combines with the halogen to form a gaseous compound that circulates within the bulb. Instead of depositing on the surface of the bulb, the compound remains a gas until it comes into contact with the filament and breaks down. It is then redeposited on the filament, and the halogen gas is free to combine with newly evaporated tungsten. Though a halogen bulb does eventually break down, it lasts much longer than an ordinary incandescent bulb and burns with a much brighter light. Also, because of the decreased tungsten deposits on the surface, it does not begin to dim as it nears the end of its life. Introduction: First isolated in 1811 from ashes of seaweed, iodine has a name derived from the Greek word meaning violet-colored—a reference to the fact that it forms dark purple crystals. During the 1800s, iodine was obtained commercially from mines in Chile, but during the twentieth century wells of brine in Japan, Oklahoma, and Michigan have proven a better source. Uses And Applications: Among the best-known properties of iodine is its importance in the human diet. The thyroid gland produces a growth-regulating hormone that contains iodine, and lack of iodine can cause a goiter, a swelling around the neck. Table salt does not naturally contain iodine; however, sodium chloride sold in stores usually contains about 0.01% sodium iodide, added by the manufacturer. Iodine was once used in the development of photography: During the early days of photographic technology, the daguerreotype process used silver plates sensitized with iodine vapors. Iodine compounds are used today in chemical analysis and in synthesis of organic compounds. Introduction: Just as fluorine has the distinction of being the most reactive, astatine is the rarest of all the elements. Long after its existence was predicted, chemists still had no luck finding it in nature, and it was only created in 1940 by bombarding bismuth with alpha particles (positively charged helium nuclei). The newly isolated element was given a Greek name meaning unstable. Indeed, none of astatine's 20 known isotopes is stable, and the longest-lived has a half-life of only 8.3 hours. This has only added to the difficulties involved in learning about this strange element, and therefore it is difficult to say what applications, if any, astatine may have. The most promising area involves the use of astatine to treat a condition known as hyperthyroidism, related to an overly active thyroid gland.

Monday, October 14, 2019

On the Implant Communication and MAC Protocols for a WBAN

On the Implant Communication and MAC Protocols for a WBAN Abstract Recent advances in micro-electro-mechanical systems (MEMS), wireless communication, low-power intelligent sensors, and semiconductor technologies have allowed the realization of a wireless body area network (WBAN). A WBAN provides unobtrusive health monitoring for a long period of time with real-time updates to the physician. It is widely used for ubiquitous healthcare, entertainment, and military applications. The implantable and wearable medical devices have several critical requirements such as power consumption, data rate, size, and low-power medium access control (MAC) protocols. This article consists of two parts: body implant communication, which is concerned with the communication to and from a human body using RF technology, and WBAN MAC protocols, which presents several low-power MAC protocols for a WBAN with useful guidelines. In body implant communication, the in-body radio frequency (RF) performance is affected considerably by the implant's depth inside the human body as well as by the muscle and fat. We observe best performance at a depth of 3cm and not close to the human skin. Furthermore, the study of low-power MAC protocols highlights the most important aspects of developing a single, low-power, and reliable MAC protocol for a WBAN. Keywords: In-body, on-body, RF communication, Implant, WBAN 1. Introduction Cardiovascular diseases have been the foremost cause of death in the United States and Europe since 1900. More than ten million people are affected in Europe, one million in the US, and twenty-two million people in the world [1]. The number is projected to triple by 2020, resulting in an expenditure of around 20% of the gross domestic product (GDP). The ratio is 17% in South Korea and 39% in the UK [2]. The healthcare expenditure in the US is expected to increase from $2.9 trillion in 2009 to $4 trillion in 2015 [3]. The impending health crisis attracts researchers, industrialists, and economists towards optimal and quick health solutions. The non-intrusive and ambulatory health monitoring of patients' vital signs, with real-time updates of medical records via the internet, provides economical solutions to healthcare systems. A wireless body area network (WBAN) is becoming increasingly important for healthcare systems, sporting activities, and members of emergency as well as military services. A WBAN is an integration of in-body (implant) and on-body (wearable) sensors that allow inexpensive, unobtrusive, and long-term health monitoring of a patient during normal daily activities for prolonged periods of time. In-body radio frequency (RF) communications have the potential to dramatically change the future of healthcare. For example, they allow an implanted pacemaker to regularly transmit performance data and the patient's health status to the physician. However, the human body poses many wireless transmission challenges. It is partially conductive and consists of materials having different dielectric constants and characteristic impedances. The interface of muscles and fats may reflect the RF wave rather than transmitting it. The key elements of an RF-linked implant are the in-body antenna and the communication link performance. Also, in the case of many implants and wearable sensors, a low-power MAC protocol is required to accommodate the heterogeneous traffic in a power-efficient manner. This article is divided into two parts: body implant communication and WBAN MAC protocols.
In the body implant communication part, we look at the RF communication link performance at various depths inside a human (artificial) body. In the MAC part, we review the existing low-power MAC protocols and discuss their pros and cons in the context of a WBAN. We further provide alternative MAC solutions for in-body and on-body communication systems. The rest of the article is divided into three sections. In section 2, we present a discussion on body implant communication including in-body electromagnetic induction, RF communication, antenna design, and the communication link performance. Section 3 discusses several low-power MAC protocols and realizes a need for a new, low-power, and reliable MAC protocol for a WBAN. The final section concludes our work. 2. Body Implant Communication There are several ways to communicate with an implant, including the use of electromagnetic induction and RF technology. Both are wireless and their use depends on the application requirements. Further, the key elements of an RF-linked implant are the in-body antenna and the communication link performance. The following part discusses in-body electromagnetic induction, RF communication, antenna design, and the communication link performance. 2.1. In-body Electromagnetic Induction Several applications still use electromagnetic coupling to provide a communication link to an implant device. In this scheme, an external coil is held very close to the body and couples to a coil implanted just below the skin surface. The implant is powered by the coupled magnetic field and requires no battery for communication. Data is transferred from the implant by altering the impedance of the implanted loop, which is detected by the external coil and electronics. This type of communication is commonly used to identify animals that have been injected with an electronic tag. Electromagnetic induction is used when continuous, long-term communication is required. The base band for electromagnetic communication is typically 13.56 MHz or 28 MHz, with other frequencies also available. The choice of a particular band is subject to regulation for maximum specific absorption rate (SAR). Inductive coupling achieves the best power transfer efficiency when it uses large transmit and receive coils. It, however, becomes less efficient when space is an issue or the device is implanted deep inside the human body. Furthermore, the inductive coupling technique does not support a very high data rate and cannot initiate a communication session from inside the body. 2.2. In-body RF Communication Compared with electromagnetic induction, RF communication dramatically increases bandwidth and supports two-way data communication. The band designated for in-body RF communication is the medical implant communication service (MICS) band, around 403 to 405 MHz. This band has a power limit of 25 µW in the air and is usually split into ten channels of 300 kHz bandwidth each. The human body is a medium that poses numerous wireless transmission challenges. It consists of various components that are not predictable and will change as the patient ages, gains or loses weight, or even changes posture. Values of dielectric constant (εr), conductivity (σ), and characteristic impedance (Zo) for some body tissues are given in Table 1 [4]. This demonstrates that these two tissue types are very different. Also, the dielectric constant affects the wavelength of a signal.
At 403 MHz, the wavelength in the air is 744mm, but in muscle with εr = 50 the wavelength reduces to 105mm, which helps in designing implanted antennas. 2.3. In-body Antenna Design A modern in-body antenna should be tuneable by using an intelligent transceiver and software routine. This enables the antenna coupling circuit to be optimised. Due to the frequency and available volume, a non-resonant antenna is commonly used. It has a lower gain than a resonant antenna. This makes design of the antenna coupling circuit very important. Antenna options are dictated by the location of the implant. A patch antenna can be used when the implant is flat. Patch antennas are comprised of a flat insulating substrate coated on both sides with a conductor. The substrate is a body-compatible material with a platinum or a platinum/iridium conductor. The upper surface is the active face and is connected to the transceiver. The connection to the transceiver needs to pass through the case where the hermetic seal is maintained, requiring a feed-through. The feed-through must have no filter capacitors present; these are common on other devices. An implanted patch antenna is electrically larger than its physical size because it is immersed in a high-εr medium. It can be much larger electrically if the substrate is of higher εr, such as titania or zirconia. A loop antenna can also be attached to the implant. This antenna operates mostly by the magnetic field, whereas the patch operates mostly by the electric field. The loop antenna delivers performance comparable to that of a dipole, but with a considerably smaller size. In addition, the magnetic permeability of muscle or fat is very similar to that of air, unlike the dielectric constant, which varies considerably. This property enables an antenna to be built and used with much less need for retuning. A loop antenna can be mounted on the case in a biocompatible structure. 2.4. In-body Link Performance The demonstration system consists of a base-station, an implant, antennas, and a controlling laptop. The base-station contains a printed circuit board (PCB) with a wakeup RF circuit, a Zarlink ZL70101 IC, and a micro-controller. It sends a wakeup signal in the industrial, scientific, and medical (ISM) 2.4 GHz band to power up the implant to communicate. It also supports communication within the MICS band. The implant contains a Zarlink ZL70101 IC, a micro-controller, and a battery. The power limits of the wakeup signal for the ISM and MICS band transmitters are 100mW and 25 µW respectively. Experiments that measure the performance of an implant inside a living body are difficult to arrange. The alternative is to use 3D simulation software or a body phantom defined in [5]. The use of 3D simulation software is time consuming and hence not practical. Therefore, measurements are generally performed using the body phantom and immersing a battery-powered implant into it [6]. Since no additional cables are attached to the test implant, the interference errors in the measurements are minimal. The body phantom is filled with a liquid that mimics the electrical properties of the human body tissues. The test environment is an anechoic chamber that includes a screened room. The interior walls of the room have RF-absorbent cones to minimize any reflections from walls or the floor that could distort the results. In real life, however, the results will be affected by the reflections from walls, desks, and other equipment and hardware.
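To make the wavelength shortening quoted in Section 2.2 concrete, the short sketch below evaluates the lossless approximation lambda = c / (f * sqrt(εr)) at 403 MHz for air and for muscle with εr = 50; it reproduces the 744 mm and roughly 105 mm figures given above. This is only an illustrative calculation (conductivity and tissue losses are ignored), not part of the measurement setup described in the article.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(f_hz: float, eps_r: float = 1.0) -> float:
    """Wavelength in mm for a lossless medium: lambda = c / (f * sqrt(eps_r))."""
    return C0 / (f_hz * math.sqrt(eps_r)) * 1000.0

f_mics = 403e6  # MICS-band frequency, Hz
print(f"Air (er = 1):     {wavelength_mm(f_mics):.0f} mm")      # ~744 mm
print(f"Muscle (er = 50): {wavelength_mm(f_mics, 50):.0f} mm")  # ~105 mm
```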
The body phantom is mounted on a wooden stand (non-conductive). The distance from the body phantom to the base-station is 3m. The MICS base-station dipole antenna is mounted on a stand. Fig. 1(a) shows the anechoic chamber with a body phantom (on the wooden stand), a log periodic test antenna (foreground), and a base-station dipole (right). The log periodic antenna is used to calculate the power radiated from the body phantom. A depth is defined as the horizontal distance between the outer skin of the phantom and the test implant. Vertical polarization of the implant is the case when the long side of the box and the patch antenna is vertical. The link performance is measured once the communication link is established. The measurements include the effective radiated power (ERP) from the implant, the received signal at the implant from the base-station, and the link quality. Measurements are made over a set distance with all the combinations of implant and test antenna polarisations, i.e., vertical-vertical (V-V), horizontal-vertical (H-V), vertical-horizontal (V-H), and horizontal-horizontal (H-H) polarisations. Typical results are shown in Fig. 1(b), where the ERP is calculated from the received signal power and the antenna characteristics. The measurement of the signal levels is done with the log periodic antenna and the spectrum analyzer. It can be seen in the figure that there is a significant difference in signal levels with polarisation combinations and depths. For a V-V polarisation, the ERP increases from a 1cm depth to a maximum between 2 and 7 cm, and then it decreases. The gradual increase is due to the simulated body acting as a parasitic antenna. The figure also shows how the signal level is affected by the depth with different polarisations. Such a test needs to be done with the antenna that is to be used in the final product. To measure the received signal at the implant, the Zarlink ZL70101 has an inbuilt receive signal strength indication (RSSI) function that gives a measure of the signal level detected. RSSI is a relative measurement with no calibration. The implant receives and measures a continuous wave signal transmitted by the base-station. In this case, the implant and the base-station antennas are vertically polarised. Fig. 1(c) shows an increase in the signal level at a depth between 3 and 4cm for a 15dec power setting. The power settings refer to the base-station and are configured to set the ERP to 25 µW. Signal levels are not meaningful unless they are related to data transmission. One way to assess the link quality is to measure the number of times the error correction is invoked during the transmission of 100 blocks of data. Two error control mechanisms, i.e., an error correction code (ECC) and a cyclic redundancy code (CRC), are invoked to maintain data integrity and reliability. Fewer ECC and CRC invocations result in better link quality. In Fig. 1(d), the error correction is lowest at a depth between 3 and 5 cm. A sample of ECC data collected at a 3cm implant depth is given in Table 2. The Count indicates the number of data blocks, the Time (ms) indicates the block transmission time, and the ECC indicates the number of times it is invoked. During the transmission of 100 blocks of data at a 3cm depth, the ECC is invoked 368 times, which is equivalent to an average of 3.68 times per block (as given in Fig. 1(d)). 2.5. Discussion The ERP, RSSI, as well as the ECC and CRC plots show that the implant demonstrates the best performance at a depth between 3 and 5 cm.
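The ERP values summarized above are worked back from the power picked up by the log-periodic antenna. A minimal sketch of that back-calculation is given below, assuming the anechoic chamber behaves like free space over the 3 m path and using a hypothetical received power and receive antenna gain (neither is taken from the article); a real calibration would also account for cable losses, the antenna factor, and the 2.15 dB offset between ERP (dipole reference) and EIRP (isotropic reference).

```python
import math

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / c)

def radiated_power_dbm(p_rx_dbm: float, d_m: float, f_hz: float, g_rx_dbi: float) -> float:
    """Back-calculate the implant's radiated power from the measured received power."""
    return p_rx_dbm + fspl_db(d_m, f_hz) - g_rx_dbi

# Hypothetical example: -75 dBm measured at 3 m, 403 MHz, with a 6 dBi test antenna.
print(f"FSPL at 3 m, 403 MHz: {fspl_db(3.0, 403e6):.1f} dB")                    # ~34.1 dB
print(f"Estimated radiated power: {radiated_power_dbm(-75.0, 3.0, 403e6, 6.0):.1f} dBm")
```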
The depth and position of an implant are not chosen for engineering performance but for the best clinical reasons. The implant designer must be aware of the possible losses through the human body. The attenuation and the parasitic antenna effects vary from patient to patient, with the position of the implant, and with time as the patient gains or loses weight. Therefore, these factors need to be built into the link budget. 3. WBAN MAC Protocols Some of the common objectives in a WBAN are to achieve maximum throughput, minimum delay, and to maximize the network lifetime by controlling the main sources of energy waste, i.e., collision, idle listening, overhearing, and control packet overhead. A collision occurs when more than one packet is transmitted at the same time. The collided packets have to be retransmitted, which consumes extra energy. The second source of energy waste is idle listening, meaning that a node listens to an idle channel to receive data. The third source is overhearing, i.e., receiving packets that are destined for other nodes. The last source is control packet overhead, meaning that control information is added to the payload. A minimal number of control packets should be used for data transmission. Generally, MAC protocols are grouped into contention-based and schedule-based MAC protocols. In contention-based MAC protocols such as carrier sense multiple access/collision avoidance (CSMA/CA) protocols, nodes contend for the channel to transmit data. If the channel is busy, the node defers its transmission until it becomes idle. These protocols are scalable with no strict time synchronization constraint. However, they incur significant protocol overhead. In schedule-based protocols such as time division multiple access (TDMA) protocols, the channel is divided into time slots of fixed or variable duration. These slots are assigned to nodes and each node transmits during its slot period. These protocols are energy-conserving protocols. Since the duty cycle of the radio is reduced, there are no contention, idle listening, or overhearing problems. But these protocols require frequent synchronization. Table 3 compares CSMA/CA and TDMA protocols. 3.1. WBAN MAC Requirements The most important attribute of a good MAC protocol for a WBAN is energy efficiency. In some applications, the device should support a battery life of months or years without intervention, while others may require a battery life of tens of hours due to the nature of the applications. For example, cardiac defibrillators and pacemakers should have a lifetime of more than 5 years, while swallowable camera pills have a lifetime of 12 hours. Power-efficient and flexible duty cycling techniques are required to minimize idle listening, overhearing, packet collisions, and control packet overhead. Furthermore, low duty cycle nodes should not receive frequent synchronization and control information (beacon frames) if they have no data to send or receive. The WBAN MAC should also support simultaneous operation on in-body (MICS) and on-body (ISM or UWB) channels. In other words, it should support multiple physical layer (Multi-PHYs) communication, or MAC transparency. Other important factors are scalability and adaptability to changes in the network, delay, throughput, and bandwidth utilization. Changes in the network topology, the position of the human body, and the node density should be handled rapidly and successfully.
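The lifetime targets in Section 3.1 (years for a pacemaker, tens of hours for a camera pill) come down to a duty-cycle budget. The sketch below, using invented current and battery figures rather than numbers from any cited device, shows how strongly the radio duty cycle set by the MAC protocol dominates the achievable lifetime.

```python
def battery_life_days(capacity_mah: float, i_active_ma: float,
                      i_sleep_ma: float, duty_cycle: float) -> float:
    """Estimated lifetime in days; average current = duty*active + (1-duty)*sleep."""
    i_avg = duty_cycle * i_active_ma + (1.0 - duty_cycle) * i_sleep_ma
    return capacity_mah / i_avg / 24.0

# Illustrative figures only: 200 mAh cell, 15 mA radio-on current, 5 uA sleep current.
for duty in (0.5, 0.01, 0.001):
    print(f"duty cycle {duty:>6.1%}: ~{battery_life_days(200, 15, 0.005, duty):.0f} days")
# 50% duty gives about a day; 0.1% duty gives over a year on the same battery.
```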
The MAC protocol for a WBAN should consider the electrical properties of the human body and the diverse traffic nature of in-body and on-body nodes. For example, the data rate of in-body nodes varies, ranging from a few kbps in a pacemaker to several Mbps in a capsular endoscope. In the following sections, we discuss proposed MAC protocols for a WBAN with useful guidelines. We also present a case study of IEEE 802.15.4, PB-TDMA, and S-MAC protocols for a WBAN using the NS-2 simulator. 3.2. Proposed MAC Protocols for a WBAN In this section, we study proposed MAC protocols for a WBAN, followed by useful suggestions/comments. Many of the proposed MAC protocols are extensions of existing MAC protocols originally proposed for wireless sensor networks (WSNs). 3.2.1. IEEE 802.15.4 IEEE 802.15.4 has remained the main focus of many researchers during the past few years. Some of the main reasons for selecting IEEE 802.15.4 for a WBAN were low-power communication and support of low data rate WBAN applications. Nicolas et al. investigated the performance of a non-beacon IEEE 802.15.4 in [7], where low upload/download rates (mostly per hour) are considered. They concluded that the non-beacon IEEE 802.15.4 results in a 10 to 15 year sensor lifetime for low data rate and asymmetric WBAN traffic. However, their work considers data transmission on the basis of periodic intervals, which is not a perfect scenario in a real WBAN. Furthermore, the data rate of in-body and on-body nodes is not always low, i.e., it ranges from 10 Kbps to 10 Mbps, and hence reduces the lifetime of the sensor nodes. Li et al. studied the behavior of slotted and unslotted CSMA/CA mechanisms and concluded that the unslotted mechanism performs better than the slotted one in terms of throughput and latency, but at a high cost in power consumption [8]. Intel Corporation conducted a series of experiments to analyze the performance of IEEE 802.15.4 for a WBAN [9]. They deployed a number of Intel Mote 2 [10] nodes on the chest, waist, and right ankle. Table 4 shows the throughput at a 0dBm transmit power when a person is standing and sitting on a chair. The connection between ankle and waist cannot be established, even for a short distance of 1.5m. All other connections show favourable performance. Dave et al. studied the energy efficiency and QoS performance of IEEE 802.15.4 and IEEE 802.11e [11] MAC protocols under two generic applications: a wave-form real time stream and a real-time parameter measurement stream [12]. Table 5 shows the throughput and the power (in mW) for both applications. The AC_BE and AC_VO represent the best-effort and voice access categories in IEEE 802.11e. Since IEEE 802.15.4 operates in the 2.4 GHz unlicensed band, the possibility of interference from other devices such as IEEE 802.11 equipment and microwave ovens is inevitable. A series of experiments to evaluate the impact of IEEE 802.11 and microwave ovens on IEEE 802.15.4 transmission are carried out in [13]. The authors considered an XBee 802.15.4 development kit that has two XBee modules. Table 6 shows the effects of a microwave oven on the XBee remote module. When the microwave oven is ON, the packet success rate and the standard deviation are degraded to 96.85% and 3.22% respectively. However, there is no loss when the XBee modules are taken 2 meters away from the microwave oven. 3.2.2. Heartbeat Driven MAC Protocol (H-MAC) A Heartbeat Driven MAC protocol (H-MAC) [14] is a TDMA-based protocol originally proposed for a star topology WBAN.
The energy efficiency is improved by exploiting heartbeat rhythm information in order to synchronize the nodes. The nodes do not need to receive periodic information to perform synchronization. The heartbeat rhythm can be extracted from the sensory data, and hence all the rhythms represented by peak sequences are naturally synchronized. The H-MAC protocol assigns dedicated time slots to each node to guarantee collision-free transmission. In addition, this protocol is supported by an active synchronization recovery scheme where two resynchronization schemes are implemented. Although the H-MAC protocol reduces the extra energy cost required for synchronization, it does not support sporadic events. Since the TDMA slots are dedicated and not traffic adaptive, the H-MAC protocol encounters low spectral/bandwidth efficiency in case of low traffic. For example, a blood pressure node may not need a dedicated time slot, while an endoscope pill may require a number of dedicated time slots when deployed in a WBAN. But the slots should be released when the endoscope pill is expelled. The heartbeat rhythm information varies depending on the patient's condition. It may not reveal valid information for synchronization all the time. One of the solutions is to assign the time slots based on the nodes' traffic information and to receive synchronization packets only when required, i.e., when a node has data to transmit/receive. 3.2.3. Reservation-based Dynamic TDMA Protocol (DTDMA) A Reservation-based Dynamic TDMA Protocol (DTDMA) [15] was originally proposed for normal (periodic) WBAN traffic, where slots are allocated to the nodes which have buffered packets and are released to other nodes when the data transmission/reception is completed. The channel is bounded by superframe structures. Each superframe consists of a beacon used to carry control information including slot allocation information, a CFP period (a configurable period used for data transmission), a CAP period (a fixed period used for short command packets using the slotted ALOHA protocol), and a configurable inactive period used to save energy. Unlike a beacon-enabled IEEE 802.15.4 superframe structure, where the CAP duration is followed by the CFP duration, in the DTDMA protocol the CFP duration is followed by the CAP duration in order to enable the nodes to send CFP traffic earlier than CAP traffic. In addition, the duration of the inactive period is configurable based on the CFP slot duration. If there is no CFP traffic, the inactive period will be increased. The DTDMA superframe structure is given in Fig. 2(a). It has been shown that for normal (periodic) traffic, the DTDMA protocol provides more dependability in terms of low packet dropping rate and low energy consumption when compared with IEEE 802.15.4. However, it does not support emergency and on-demand traffic. Although slot allocation based on traffic information is a good approach, the DTDMA protocol has several limitations when considered for the MICS band. The MICS band has ten channels, where each channel has 300 kHz bandwidth. The DTDMA protocol is valid only for one channel and cannot operate on ten channels simultaneously. In addition, the DTDMA protocol does not support the channel allocation mechanism in the MICS band. This protocol can be further investigated for the MICS band by integrating the channel information in the beacon frame.
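The DTDMA superframe described above (a beacon, a CFP that grows and shrinks with the nodes' buffered traffic, a fixed CAP, and whatever remains as the inactive period) can be pictured with a small scheduling structure. The sketch below is only a loose illustration of that slot-reservation idea, not the protocol as specified in [15]; the slot counts and node names are invented, and the beacon is left out for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class DTDMAStyleSuperframe:
    """Toy DTDMA-style superframe: CFP slots granted on request, a fixed CAP,
    and the unused remainder treated as the inactive (sleep) period."""
    total_slots: int = 32
    cap_slots: int = 4
    cfp: dict = field(default_factory=dict)  # node_id -> granted CFP slots

    def request_cfp(self, node_id: str, slots: int) -> bool:
        if sum(self.cfp.values()) + slots + self.cap_slots > self.total_slots:
            return False                      # no room left in this superframe
        self.cfp[node_id] = self.cfp.get(node_id, 0) + slots
        return True

    def release(self, node_id: str) -> None:
        self.cfp.pop(node_id, None)           # freed slots enlarge the inactive period

    @property
    def inactive_slots(self) -> int:
        return self.total_slots - self.cap_slots - sum(self.cfp.values())

sf = DTDMAStyleSuperframe()
sf.request_cfp("endoscope_pill", 12)          # high-rate implant
sf.request_cfp("blood_pressure", 1)           # low-rate node
print(sf.cfp, "inactive slots:", sf.inactive_slots)   # inactive slots: 15
```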
The new concept may be called Frequency-based DTDMA (F-DTDMA), i.e., the coordinator first selects one of the channels in the MICS band and then divides the selected channel in TDMA superframe (s) according to the DTDMA protocol. However the FCC has imposed several restrictions on the channel selection/allocation mechanism in the MICS band, which further creates problems for the MAC designers. 3.2.4. BodyMAC Protocol A BodyMAC protocol is a TDMA-based protocol where the channel is bounded by TDMA superframe structures with downlink and uplink subframes as given in 2(b) [16]. The downlink frame is used to accommodate the on-demand traffic and the uplink frame is used to accommodate the normal traffic. There is no proper mechanism to handle the emergency traffic. The uplink frame is further divided into CAP and CFP periods. The CAP period is used to transmit small size MAC packets. The CFP period is used to transmit the normal data in a TDMA slot. The duration of the downlink and uplink superframes are defined by the coordinator. The advantage of the BodyMAC protocol is that it accommodates the on-demand traffic using the downlink subframe. However, in case of low-power implants (which should not receive beacons periodically), the coordinator has to wake up the implant first and then send synchronization packets. After synchronization, the coordinator can request/send data in the downlink subframe. The wake up procedure for low-power implants is not defined in the BodyMAC protocol. One of the solutions is to use a wakeup radio in order to wake up low-power implants before using the downlink subframe. In addition the wakeup packets can be used to carry control information such as channel (MICS band) and slot allocation information from the coordinator to the nodes. Finally, the BodyMAC protocol uses the CSMA/CA protocol in the CAP period which is not reliable for a WBAN. This should be replaced by slotted-ALOHA as done in DTDMA. Further details on low-power MAC protocols (originally proposed for WSNs) for a WBAN are given in Appendix I. 3.3. Case Study: IEEE 802.15.4, PB-TDMA, and SMAC Protocols for a WBAN In this section, we investigate the performance of a beacon-enabled IEEE 802.15.4, preamble-based TDMA [17], and SMAC protocols for an on-body communication system. Our analysis is verified by extensive simulations using NS-2. The wireless physical parameters are considered according to a low-power Nordic nRF2401 transceiver (Chipcon CC2420 radio [18] is considered in case of IEEE 802.15.4) [19]. This radio transceiver operates in the 2.4-2.5 GHz band with an optimum transmission power of -5dBm. We use the shadowing propagation model throughout the simulations. We consider a total of 7 nodes firmly placed on a human body. The nodes are connected to the coordinator in a star topology. The distribution of the nodes and the coordinator is given in 3(a). The initial nodes energy is 5 Joules. The packet size is 120 bytes. The average data transmission rate of ECG, EEG, and EMG is 10, 70, and 100 kbps. The transport agent is a user datagram protocol (UDP). Since the traffic is an uplink t raffic, the buffer size at the coordinator is considered unlimited. In a real WBAN, the buffer size should be configurable based on the application requirements. For energy calculation, we use the existing energy model defined in NS-2. The simulation area is 33 meter and each node generates constant bit rate (CBR) traffic. 
The CBR traffic is an ideal model for some of the medical applications, where the nodes send data based on pre-defined traffic patterns. However, most of the nodes in a WBAN have heterogeneous traffic characteristics and they generate periodic and aperiodic traffic. In this case, they will have many traffic models operating at the same time, ranging from CBR to variable bit rate (VBR). 3(b) shows the throughput of the IEEE 802.15.4, PB-TDMA, and S-MAC protocols. The performance of the IEEE 802.15.4, when cond in a beacon-enabled mode, outperforms PB-TDMA and S-MAC protocols. The efficiency of a MAC protocol depends on the traffic pattern. In this case, S-MAC protocol results poor performance because the traffic scenario that we generated is not an ideal scenario for the S-MAC. 3(c) shows the residual energy at various nodes during simulation time. When nodes finish their transmission, they go into sleep mode, as indicated by the horizontal line. The coordinator has a considerable energy loss because it always listens to the other nodes. However, the energy consumption of the coordinator is not a critical issue in a WBAN. We further analyze the residual energy at the ECG node for different transmission powers. There is a minor change in energy loss for three different transmission powers as given in 3(d). This concludes that reducing the transmission power only d oes not save energy unless supported by an efficient power management scheme. The IEEE 802.15.4 can be considered for certain on-body medical applications, but it does not achieve the level of power required for in-body nodes. It is not sufficient for high data rate medical and non-medical applications due to its limitations to 250 kbps. Furthermore, it uses slotted or unslotted CSMA/CA where the nodes are required to sense the channel before transmission. However, the channel sensing is not guaranteed in MICS band because the path loss inside the human body due to tissue heating is much higher than in free space. Bin et.al studied the clear channel assessment (CCA) range of in-body nodes which is only 0.5 meters [20]. This unreliability in CCA indicates that CSMA/CA is not an ideal technique for the in-body communication system. An alternative approach is to use a TDMA-based protocol that contains a beacon, a configurable contention access period (CCAP), and a contention free period (CFP) [21]. Unlike the IEEE 802.15.4, this protocol is required to use a slot ted-ALOHA protocol in the CCAP instead of CSMA/CA. The CCAP period should contain few slots (3 or 4) of equal duration and can be used for short data transmission and a guaranteed time slot (GTS) allocation. To enable a logical connection between the in-body and the on-body communication systems, a method called bridging function can be used as discussed in [21]. The bridging function can integrate in-body and on-body nodes into a WBAN, thus satisfying the MAC transparency requirement. Further details about bridging function are given in [22]. 3.4. Discussion Since the CSMA/CA is not suitable due to unreliable CCA and heavy collision problems, it can be seen that the most reliable power-efficient protocol is a TDMA-based protocol. Many protocols have been proposed for a WBAN and most of them are based on a TDMA-based mechanism. However, all of them have pros and cons for a real WBAN system that should operate on Multi-PHYs (MICS, ISM, and UWB) simultaneously. 
On the Implant Communication and MAC Protocols for a WBAN

Abstract

Recent advances in micro-electro-mechanical systems (MEMS), wireless communication, low-power intelligent sensors, and semiconductor technologies have allowed the realization of a wireless body area network (WBAN). A WBAN provides unobtrusive health monitoring for a long period of time with real-time updates to the physician. It is widely used for ubiquitous healthcare, entertainment, and military applications. Implantable and wearable medical devices have several critical requirements such as power consumption, data rate, size, and low-power medium access control (MAC) protocols. This article consists of two parts: body implant communication, which is concerned with communication to and from a human body using RF technology, and WBAN MAC protocols, which presents several low-power MAC protocols for a WBAN with useful guidelines. In body implant communication, the in-body radio frequency (RF) performance is affected considerably by the implant's depth inside the human body as well as by the surrounding muscle and fat. We observe the best performance at a depth of 3 cm, rather than close to the skin. Furthermore, the study of low-power MAC protocols highlights the most important aspects of developing a single, low-power, and reliable MAC protocol for a WBAN.

Keywords: In-body, on-body, RF communication, Implant, WBAN

1. Introduction

Cardiovascular diseases have been the foremost cause of death in the United States and Europe since 1900. More than ten million people are affected in Europe, one million in the US, and twenty-two million worldwide [1]. The number is projected to triple by 2020, resulting in an expenditure of around 20% of the gross domestic product (GDP); the ratio is 17% in South Korea and 39% in the UK [2]. Healthcare expenditure in the US is expected to increase from $2.9 trillion in 2009 to $4 trillion in 2015 [3]. The impending health crisis attracts researchers, industrialists, and economists towards optimal and quick health solutions. Non-intrusive, ambulatory monitoring of patients' vital signs, with real-time updates of medical records via the internet, provides an economical solution for healthcare systems. A wireless body area network (WBAN) is becoming increasingly important for healthcare systems, sporting activities, and members of emergency and military services. A WBAN is an integration of in-body (implant) and on-body (wearable) sensors that allows inexpensive, unobtrusive, and long-term health monitoring of a patient during normal daily activities.
In-body radio frequency (RF) communication has the potential to dramatically change the future of healthcare. For example, it allows an implanted pacemaker to regularly transmit performance data and the patient's health status to the physician. However, the human body poses many wireless transmission challenges. It is partially conductive and consists of materials with different dielectric constants and characteristic impedances. The interface of muscle and fat may reflect the RF wave rather than transmitting it. The key elements of an RF-linked implant are the in-body antenna and the communication link performance. Also, when many implants and wearable sensors are present, a low-power MAC protocol is required to accommodate the heterogeneous traffic in a power-efficient manner.

This article is divided into two parts: body implant communication and WBAN MAC protocols. In the body implant communication part, we look at the RF communication link performance at various depths inside a human (artificial) body. In the MAC part, we review the existing low-power MAC protocols and discuss their pros and cons in the context of a WBAN. We further provide alternative MAC solutions for in-body and on-body communication systems. The rest of the article is divided into three sections. In Section 2, we present a discussion of body implant communication, including in-body electromagnetic induction, RF communication, antenna design, and the communication link performance. Section 3 discusses several low-power MAC protocols and identifies the need for a new, low-power, and reliable MAC protocol for a WBAN. The final section concludes our work.

2. Body Implant Communication

There are several ways to communicate with an implant, including electromagnetic induction and RF technology. Both are wireless, and the choice between them depends on the application requirements. Further, the key elements of an RF-linked implant are the in-body antenna and the communication link performance. The following parts discuss in-body electromagnetic induction, RF communication, antenna design, and the communication link performance.

2.1. In-body Electromagnetic Induction

Several applications still use electromagnetic coupling to provide a communication link to an implanted device. In this scheme, an external coil is held very close to the body and couples to a coil implanted just below the skin surface. The implant is powered by the coupled magnetic field and requires no battery for communication. Data is transferred from the implant by altering the impedance of the implanted loop, which is detected by the external coil and electronics. This type of communication is commonly used to identify animals that have been injected with an electronic tag. Electromagnetic induction is used when continuous, long-term communication is required. The band used for electromagnetic communication is typically 13.56 MHz or 28 MHz, with other frequencies also available. The choice of a particular band is subject to regulation of the maximum specific absorption rate (SAR). Inductive coupling achieves its best power transfer efficiency when large transmit and receive coils are used. It becomes less efficient, however, when space is limited or the device is implanted deep inside the human body. Furthermore, the inductive coupling technique does not support a very high data rate and cannot initiate a communication session from inside the body.
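The coils in such an inductive link are normally tuned to resonance at the chosen frequency. As a rough illustration (not taken from the article), the following sketch computes the tuning capacitance needed to resonate a hypothetical 2.5 uH implant coil at 13.56 MHz, using f0 = 1/(2*pi*sqrt(L*C)):

import math

f0 = 13.56e6          # target resonant frequency in Hz (13.56 MHz band)
L_coil = 2.5e-6       # hypothetical implant coil inductance in henries (assumption)

# LC resonance: f0 = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / ((2*pi*f0)**2 * L)
C_tune = 1.0 / ((2 * math.pi * f0) ** 2 * L_coil)

print(f"Tuning capacitance: {C_tune * 1e12:.1f} pF")   # roughly 55 pF for these values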
2.2. In-body RF Communication

Compared with electromagnetic induction, RF communication dramatically increases the bandwidth and supports two-way data communication. The band designated for in-body RF communication is the medical implant communication service (MICS) band, located around 403 to 405 MHz. This band has a power limit of 25 µW in the air and is usually split into ten channels of 300 kHz bandwidth each. The human body is a medium that poses numerous wireless transmission challenges. It consists of various components that are not predictable and will change as the patient ages, gains or loses weight, or even changes posture. Values of the dielectric constant (εr), conductivity (σ), and characteristic impedance (Zo) for some body tissues are given in Table 1 [4], which demonstrates that these tissue types are very different. The dielectric constant also affects the wavelength of a signal. At 403 MHz, the wavelength in air is 744 mm, but in muscle with εr = 50 the wavelength reduces to 105 mm, which helps in designing implanted antennas.

2.3. In-body Antenna Design

A modern in-body antenna should be tuneable by using an intelligent transceiver and software routine. This enables the antenna coupling circuit to be optimised. Because of the frequency and the available volume, a non-resonant antenna is commonly used. It has a lower gain than a resonant antenna, which makes the design of the antenna coupling circuit very important. Antenna options are dictated by the location of the implant. A patch antenna can be used when the implant is flat. Patch antennas consist of a flat insulating substrate coated on both sides with a conductor. The substrate is a body-compatible material with a platinum or platinum/iridium conductor. The upper surface is the active face and is connected to the transceiver. The connection to the transceiver needs to pass through the case where the hermetic seal is maintained, requiring a feed-through. The feed-through must have no filter capacitors present; these are common on other devices. An implanted patch antenna is electrically larger than its physical size because it is immersed in a high-εr medium. It can be made electrically larger still if the substrate has a higher εr, such as titania or zirconia. A loop antenna can also be attached to the implant. This antenna operates mostly by the magnetic field, whereas the patch operates mostly by the electric field. The loop antenna delivers performance comparable to that of a dipole, but with a considerably smaller size. In addition, the magnetic permeability of muscle or fat is very similar to that of air, unlike the dielectric constant, which varies considerably. This property enables an antenna to be built and used with much less need for retuning. A loop antenna can be mounted on the case in a biocompatible structure.

2.4. In-body Link Performance

The demonstration system consists of a base-station, an implant, antennas, and a controlling laptop. The base-station contains a printed circuit board (PCB) with a wakeup RF circuit, a Zarlink ZL70101 IC, and a micro-controller. It sends a wakeup signal on the industrial, scientific, and medical (ISM) 2.4 GHz band to power up the implant to communicate. It also supports communication within the MICS band. The implant contains a Zarlink ZL70101 IC, a micro-controller, and a battery. The power limits of the wakeup signal for the ISM and MICS band transmitters are 100 mW and 25 µW, respectively.
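To put these numbers in more familiar units, the short sketch below (an illustration, not part of the original measurements) recomputes the in-tissue wavelength quoted in Section 2.2 and expresses the 25 µW MICS limit in dBm:

import math

c = 3e8                      # speed of light in m/s
f = 403e6                    # MICS-band frequency in Hz
eps_r_muscle = 50            # relative permittivity of muscle, as discussed above

wavelength_air = c / f                                          # about 0.744 m
wavelength_muscle = wavelength_air / math.sqrt(eps_r_muscle)    # about 0.105 m

p_mics_w = 25e-6             # MICS power limit in watts
p_mics_dbm = 10 * math.log10(p_mics_w / 1e-3)                   # about -16 dBm

print(f"Wavelength in air:    {wavelength_air * 1000:.0f} mm")
print(f"Wavelength in muscle: {wavelength_muscle * 1000:.0f} mm")
print(f"MICS power limit:     {p_mics_dbm:.1f} dBm")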
Experiments that measure the performance of an implant inside a living body are difficult to arrange. The alternative is to use 3D simulation software or a body phantom as defined in [5]. The use of 3D simulation software is time-consuming and hence of limited practical value. Therefore, measurements are generally performed using the body phantom and immersing a battery-powered implant into it [6]. Since no additional cables are attached to the test implant, the interference errors in the measurements are minimal. The body phantom is filled with a liquid that mimics the electrical properties of human body tissues. The test environment is an anechoic chamber that includes a screened room. The interior walls of the room are covered with absorbent cones to minimize any reflections from the walls or the floor that could distort the results. In real life, however, the results will be affected by reflections from walls, desks, and other equipment and hardware. The body phantom is mounted on a wooden (non-conductive) stand. The distance from the body phantom to the base-station is 3 m. The MICS base-station dipole antenna is mounted on a stand. Figure 1(a) shows the anechoic chamber with a body phantom (on the wooden stand), a log-periodic test antenna (foreground), and a base-station dipole (right). The log-periodic antenna is used to calculate the power radiated from the body phantom. The depth is defined as the horizontal distance between the outer skin of the phantom and the test implant. Vertical polarisation of the implant is the case when the long side of the box and the patch antenna are vertical.

The link performance is measured once the communication link is established. The measurements include the effective radiated power (ERP) from the implant, the received signal at the implant from the base-station, and the link quality. Measurements are made over a set distance with all combinations of implant and test antenna polarisations, i.e., vertical-vertical (V-V), horizontal-vertical (H-V), vertical-horizontal (V-H), and horizontal-horizontal (H-H). Typical results are shown in Figure 1(b), where the ERP is calculated from the received signal power and the antenna characteristics. The measurement of the signal levels is done with the log-periodic antenna and a spectrum analyzer. It can be seen in the figure that there is a significant difference in signal levels across polarisation combinations and depths. For a V-V polarisation, the ERP increases from a 1 cm depth to a maximum between 2 and 7 cm, and then decreases. The gradual increase is due to the simulated body acting as a parasitic antenna. The figure also shows how the signal level is affected by depth for different polarisations. Such a test needs to be done with the antenna that is to be used in the final product. To measure the received signal at the implant, the Zarlink ZL70101 has an inbuilt receive signal strength indication (RSSI) function that gives a measure of the detected signal level. RSSI is a relative measurement with no calibration. The implant receives and measures a continuous wave signal transmitted by the base-station. In this case, the implant and the base-station antennas are vertically polarised. Figure 1(c) shows an increase in the signal level at a depth between 3 and 4 cm for a 15dec power setting. The power settings refer to the base-station and are configured to set the ERP to 25 µW. Signal levels are not valuable unless they are related to data transmission.
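As a rough, free-space sanity check on these measurements (an illustration only; the real channel includes body and chamber effects), the sketch below estimates the power a receiving dipole 3 m away would see from an implant radiating the full 25 µW ERP at 403 MHz:

import math

f = 403e6                          # MICS-band frequency in Hz
d = 3.0                            # phantom-to-base-station distance in metres
erp_dbm = -16.0                    # 25 uW ERP expressed in dBm
eirp_dbm = erp_dbm + 2.15          # convert ERP (dipole-referenced) to EIRP (isotropic)
rx_gain_dbi = 2.15                 # assumed gain of the base-station dipole (dBi)

wavelength = 3e8 / f
fspl_db = 20 * math.log10(4 * math.pi * d / wavelength)   # free-space path loss, ~34 dB

rx_power_dbm = eirp_dbm + rx_gain_dbi - fspl_db
print(f"Free-space path loss at 3 m: {fspl_db:.1f} dB")
print(f"Estimated received power:    {rx_power_dbm:.1f} dBm")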
One way to assess the link quality is to measure the number of times error correction is invoked during the transmission of 100 blocks of data. Two mechanisms, an error correction code (ECC) and a cyclic redundancy check (CRC), are invoked to maintain data integrity and reliability. Fewer ECC and CRC invocations indicate better link quality. In Figure 1(d), the error correction is lowest at a depth between 3 and 5 cm. A sample of ECC data collected at a 3 cm implant depth is given in Table 2. The Count indicates the number of data blocks, the Time (ms) indicates the block transmission time, and the ECC indicates the number of times it is invoked. During the transmission of 100 blocks of data at a 3 cm depth, the ECC is invoked 368 times, which is equivalent to an average of 3.68 invocations per block (as given in Figure 1(d)).

2.5. Discussion

The ERP, RSSI, ECC, and CRC plots show that the implant demonstrates the best performance at a depth between 3 and 5 cm. The depth and position of an implant are not chosen for engineering performance but for the best clinical reasons. The implant designer must be aware of the possible losses through the human body. The attenuation and the parasitic antenna effects vary from patient to patient, with the position of the implant, and over time as the patient gains or loses weight. Therefore, these factors need to be built into the link budget.

3. WBAN MAC Protocols

Some of the common objectives in a WBAN are to achieve maximum throughput, minimum delay, and to maximize the network lifetime by controlling the main sources of energy waste, i.e., collision, idle listening, overhearing, and control packet overhead. A collision occurs when more than one node transmits a packet at the same time; the collided packets have to be retransmitted, which consumes extra energy. The second source of energy waste is idle listening, meaning that a node listens to an idle channel while waiting for data. The third source is overhearing, i.e., receiving packets that are destined for other nodes. The last source is control packet overhead, meaning that control information is added to the payload; a minimal number of control packets should be used for data transmission. Generally, MAC protocols are grouped into contention-based and schedule-based protocols. In contention-based MAC protocols such as carrier sense multiple access with collision avoidance (CSMA/CA), nodes contend for the channel to transmit data; if the channel is busy, the node defers its transmission until the channel becomes idle. These protocols are scalable and have no strict time synchronization constraint, but they incur significant protocol overhead. In schedule-based protocols such as time division multiple access (TDMA), the channel is divided into time slots of fixed or variable duration. The slots are assigned to nodes, and each node transmits only during its slot period. These protocols are energy conserving: since the duty cycle of the radio is reduced, there are no contention, idle listening, or overhearing problems. However, they require frequent synchronization. Table 3 compares the CSMA/CA and TDMA approaches.

3.1. WBAN MAC Requirements

The most important attribute of a good MAC protocol for a WBAN is energy efficiency. In some applications, the device should support a battery life of months or years without intervention, while others may require a battery life of only tens of hours due to the nature of the application. For example, cardiac defibrillators and pacemakers should have a lifetime of more than 5 years, while swallowable camera pills have a lifetime of 12 hours.
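The gap between these lifetime targets is largely a matter of duty cycling. As a rough illustration (the battery and current figures below are assumptions, not values from the article), the sketch estimates node lifetime from a simple two-state current model in which the radio's duty cycle dominates the average drain:

def lifetime_years(battery_mah, sleep_ua, active_ma, duty_cycle):
    """Estimate node lifetime from a simple sleep/active current model."""
    avg_ma = (1 - duty_cycle) * (sleep_ua / 1000.0) + duty_cycle * active_ma
    hours = battery_mah / avg_ma
    return hours / (24 * 365)

# Hypothetical numbers: 200 mAh battery, 1 uA sleep current, 15 mA active radio current.
for duty in (0.001, 0.01, 0.1):
    print(f"duty cycle {duty:5.1%}: ~{lifetime_years(200, 1, 15, duty):.2f} years")

Even with these generous assumptions, the lifetime collapses as the duty cycle grows, which is why the duty cycling techniques discussed next matter so much.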
Power-efficient and flexible duty cycling techniques are required to minimize idle listening, overhearing, packet collisions, and control packet overhead. Furthermore, low duty cycle nodes should not receive frequent synchronization and control information (beacon frames) if they have no data to send or receive. The WBAN MAC should also support simultaneous operation on in-body (MICS) and on-body (ISM or UWB) channels; in other words, it should support multiple physical layer (Multi-PHYs) communication, or MAC transparency. Other important factors are scalability and adaptability to changes in the network, delay, throughput, and bandwidth utilization. Changes in the network topology, the position of the human body, and the node density should be handled rapidly and successfully. The MAC protocol for a WBAN should also consider the electrical properties of the human body and the diverse traffic nature of in-body and on-body nodes. For example, the data rate of in-body nodes varies widely, ranging from a few kbps in a pacemaker to several Mbps in a capsule endoscope. In the following sections, we discuss proposed MAC protocols for a WBAN with useful guidelines. We also present a case study of the IEEE 802.15.4, PB-TDMA, and S-MAC protocols for a WBAN using the NS-2 simulator.

3.2. Proposed MAC Protocols for a WBAN

In this section, we study proposed MAC protocols for a WBAN, followed by useful suggestions and comments. Many of the proposed MAC protocols are extensions of existing MAC protocols originally proposed for wireless sensor networks (WSNs).

3.2.1. IEEE 802.15.4

IEEE 802.15.4 has remained the main focus of many researchers during the past few years. Some of the main reasons for selecting IEEE 802.15.4 for a WBAN are its low-power communication and its support for low data rate applications. Nicolas et al. investigated the performance of a non-beacon IEEE 802.15.4 in [7], where low upload/download rates (mostly per hour) are considered. They concluded that the non-beacon IEEE 802.15.4 results in a sensor lifetime of 10 to 15 years for low data rate and asymmetric WBAN traffic. However, their work considers data transmission at periodic intervals, which is not a realistic scenario for a real WBAN. Furthermore, the data rates of in-body and on-body nodes are not always low, i.e., they range from 10 kbps to 10 Mbps, which reduces the lifetime of the sensor nodes. Li et al. studied the behavior of the slotted and unslotted CSMA/CA mechanisms and concluded that the unslotted mechanism performs better than the slotted one in terms of throughput and latency, but at the cost of higher power consumption [8] (a sketch of the slotted backoff procedure is given below). Intel Corporation conducted a series of experiments to analyze the performance of IEEE 802.15.4 for a WBAN [9]. They deployed a number of Intel Mote 2 [10] nodes on the chest, waist, and right ankle. Table 4 shows the throughput at a 0 dBm transmit power when a person is standing and sitting on a chair. The connection between the ankle and the waist cannot be established, even for a short distance of 1.5 m. All other connections show favourable performance. Dave et al. studied the energy efficiency and QoS performance of the IEEE 802.15.4 and IEEE 802.11e [11] MAC protocols under two generic applications: a wave-form real-time stream and a real-time parameter measurement stream [12]. Table 5 shows the throughput and the power (in mW) for both applications, where AC_BE and AC_VO represent the best-effort and voice access categories of IEEE 802.11e.
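For reference, the following is a minimal, simplified sketch of the slotted CSMA/CA backoff procedure compared in [8]. It follows the general shape of the standard algorithm (random backoff, then clear channel assessment, with an exponentially growing backoff window on a busy channel) but omits details such as the two consecutive CCAs and the battery-life extension option:

import random

MAC_MIN_BE, MAC_MAX_BE, MAX_BACKOFFS = 3, 5, 4

def slotted_csma_ca(channel_idle, rng=random):
    """Return True if the channel was acquired, False if the attempt failed.

    channel_idle: callable returning True when a CCA finds the channel idle.
    """
    nb, be = 0, MAC_MIN_BE                      # number of backoffs, backoff exponent
    while True:
        slots = rng.randint(0, 2 ** be - 1)     # random backoff delay, in backoff periods
        # (in a real MAC the node would sleep for `slots` backoff periods here)
        if channel_idle():                      # clear channel assessment on a slot boundary
            return True                         # channel acquired; transmit the frame
        nb += 1
        be = min(be + 1, MAC_MAX_BE)            # grow the backoff window exponentially
        if nb > MAX_BACKOFFS:
            return False                        # give up: channel access failure

# Example: a channel that happens to be idle 70% of the time.
print(slotted_csma_ca(lambda: random.random() < 0.7))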
Since IEEE 802.15.4 operates in the 2.4 GHz unlicensed band, interference from other devices such as IEEE 802.11 and microwave ovens is inevitable. A series of experiments to evaluate the impact of IEEE 802.11 and microwave ovens on IEEE 802.15.4 transmission is reported in [13]. The authors considered an XBee 802.15.4 development kit that has two XBee modules. Table 6 shows the effect of a microwave oven on the XBee remote module. When the microwave oven is on, the packet success rate and the standard deviation are degraded to 96.85% and 3.22%, respectively. However, there is no loss when the XBee modules are moved 2 meters away from the microwave oven.

3.2.2. Heartbeat Driven MAC Protocol (H-MAC)

The Heartbeat Driven MAC protocol (H-MAC) [14] is a TDMA-based protocol originally proposed for a star topology WBAN. Its energy efficiency is improved by exploiting heartbeat rhythm information to synchronize the nodes, so the nodes do not need to receive periodic synchronization information. The heartbeat rhythm can be extracted from the sensory data, and hence all the rhythms represented by peak sequences are naturally synchronized. The H-MAC protocol assigns dedicated time slots to each node to guarantee collision-free transmission. In addition, the protocol is supported by an active synchronization recovery scheme in which two resynchronization schemes are implemented. Although the H-MAC protocol reduces the extra energy cost of synchronization, it does not support sporadic events. Since the TDMA slots are dedicated and not traffic adaptive, the H-MAC protocol has low spectral/bandwidth efficiency in the case of low traffic. For example, a blood pressure node may not need a dedicated time slot, while an endoscope pill may require a number of dedicated time slots when deployed in a WBAN; those slots should be released when the endoscope pill is expelled. Furthermore, the heartbeat rhythm varies with the patient's condition, so it may not always provide valid information for synchronization. One solution is to assign the time slots based on the nodes' traffic information and to receive synchronization packets only when required, i.e., when a node has data to transmit or receive.

3.2.3. Reservation-based Dynamic TDMA Protocol (DTDMA)

The Reservation-based Dynamic TDMA protocol (DTDMA) [15] is originally proposed for normal (periodic) WBAN traffic, where slots are allocated to nodes that have buffered packets and are released to other nodes when the data transmission/reception is completed. The channel is bounded by superframe structures. Each superframe consists of a beacon, used to carry control information including slot allocation information; a contention-free period (CFP), a configurable period used for data transmission; a contention access period (CAP), a fixed period used for short command packets using the slotted ALOHA protocol; and a configurable inactive period used to save energy. Unlike a beacon-enabled IEEE 802.15.4 superframe structure, where the CAP duration is followed by the CFP duration, in the DTDMA protocol the CFP duration is followed by the CAP duration in order to enable the nodes to send CFP traffic earlier than CAP traffic. In addition, the duration of the inactive period is configurable based on the CFP slot duration: if there is no CFP traffic, the inactive period is increased. The DTDMA superframe structure is given in Figure 2(a).
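To make the ordering concrete, the snippet below sketches the DTDMA superframe described above as a simple data structure; the field names and the slot counts are illustrative assumptions, not values from [15]:

from dataclasses import dataclass

@dataclass
class DTDMASuperframe:
    """Beacon | CFP (configurable) | CAP (fixed) | inactive (configurable)."""
    beacon_slots: int = 1        # carries slot-allocation and other control information
    cfp_slots: int = 8           # configurable, sized to the buffered-data reservations
    cap_slots: int = 4           # fixed, short command packets via slotted ALOHA
    total_slots: int = 32        # overall superframe length in slots

    @property
    def inactive_slots(self) -> int:
        # Whatever is left after the beacon, CFP, and CAP is spent sleeping to save energy.
        return self.total_slots - (self.beacon_slots + self.cfp_slots + self.cap_slots)

# With no CFP reservations the inactive (sleep) period grows, as the protocol intends.
print(DTDMASuperframe(cfp_slots=8).inactive_slots)   # 19
print(DTDMASuperframe(cfp_slots=0).inactive_slots)   # 27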
It has been shown that, for normal (periodic) traffic, the DTDMA protocol provides more dependability in terms of a low packet dropping rate and low energy consumption when compared with IEEE 802.15.4. However, it does not support emergency and on-demand traffic. Although slot allocation based on traffic information is a good approach, the DTDMA protocol has several limitations when considered for the MICS band. The MICS band has ten channels, each with a 300 kHz bandwidth. The DTDMA protocol is valid only for one channel and cannot operate on the ten channels simultaneously. In addition, the DTDMA protocol does not support a channel allocation mechanism for the MICS band. The protocol could be further investigated for the MICS band by integrating the channel information in the beacon frame. The new concept may be called Frequency-based DTDMA (F-DTDMA), i.e., the coordinator first selects one of the channels in the MICS band and then divides the selected channel into TDMA superframe(s) according to the DTDMA protocol. However, the FCC has imposed several restrictions on the channel selection/allocation mechanism in the MICS band, which further complicates the task of the MAC designer.

3.2.4. BodyMAC Protocol

The BodyMAC protocol is a TDMA-based protocol in which the channel is bounded by TDMA superframe structures with downlink and uplink subframes, as given in Figure 2(b) [16]. The downlink subframe is used to accommodate on-demand traffic, and the uplink subframe is used to accommodate normal traffic; there is no proper mechanism to handle emergency traffic. The uplink subframe is further divided into CAP and CFP periods. The CAP period is used to transmit small-size MAC packets, and the CFP period is used to transmit normal data in a TDMA slot. The durations of the downlink and uplink subframes are defined by the coordinator. The advantage of the BodyMAC protocol is that it accommodates on-demand traffic using the downlink subframe. However, in the case of low-power implants (which should not receive beacons periodically), the coordinator has to wake up the implant first and then send synchronization packets; only after synchronization can the coordinator request or send data in the downlink subframe. This wake-up procedure for low-power implants is not defined in the BodyMAC protocol. One solution is to use a wakeup radio to wake up low-power implants before using the downlink subframe. In addition, the wakeup packets can be used to carry control information, such as channel (MICS band) and slot allocation information, from the coordinator to the nodes. Finally, the BodyMAC protocol uses the CSMA/CA protocol in the CAP period, which is not reliable for a WBAN; this should be replaced by slotted ALOHA, as is done in DTDMA. Further details on low-power MAC protocols (originally proposed for WSNs) for a WBAN are given in Appendix I.

3.3. Case Study: IEEE 802.15.4, PB-TDMA, and S-MAC Protocols for a WBAN

In this section, we investigate the performance of a beacon-enabled IEEE 802.15.4, preamble-based TDMA (PB-TDMA) [17], and S-MAC protocols for an on-body communication system. Our analysis is verified by extensive simulations using NS-2. The wireless physical parameters are chosen according to a low-power Nordic nRF2401 transceiver [19] (a Chipcon CC2420 radio [18] is considered in the case of IEEE 802.15.4). This radio transceiver operates in the 2.4-2.5 GHz band with an optimum transmission power of -5 dBm. We use the shadowing propagation model throughout the simulations.
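For readers unfamiliar with NS-2's shadowing model, the following is a minimal sketch of the underlying log-normal path-loss equation; the path-loss exponent and shadowing deviation used here are placeholder values, not the parameters of our simulations:

import math, random

def shadowing_path_loss_db(d, d0=1.0, pl_d0_db=40.0, n=3.5, sigma_db=4.0, rng=random):
    """Log-normal shadowing: PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma."""
    x_sigma = rng.gauss(0.0, sigma_db)            # zero-mean Gaussian shadowing term (dB)
    return pl_d0_db + 10 * n * math.log10(d / d0) + x_sigma

# Received power for a -5 dBm transmitter at 0.5 m (a typical on-body distance).
tx_dbm = -5.0
print(f"Rx power: {tx_dbm - shadowing_path_loss_db(0.5):.1f} dBm")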
We consider a total of 7 nodes firmly placed on a human body and connected to the coordinator in a star topology. The distribution of the nodes and the coordinator is given in Figure 3(a). The initial energy of each node is 5 joules, and the packet size is 120 bytes. The average data transmission rates of ECG, EEG, and EMG are 10, 70, and 100 kbps, respectively. The transport agent is the user datagram protocol (UDP). Since the traffic is uplink traffic, the buffer size at the coordinator is considered unlimited; in a real WBAN, the buffer size should be configurable based on the application requirements. For energy calculation, we use the existing energy model defined in NS-2. The simulation area is 3 × 3 meters, and each node generates constant bit rate (CBR) traffic. CBR traffic is an ideal model for some medical applications, where the nodes send data based on pre-defined traffic patterns. However, most of the nodes in a WBAN have heterogeneous traffic characteristics and generate both periodic and aperiodic traffic; in that case, many traffic models operate at the same time, ranging from CBR to variable bit rate (VBR).

Figure 3(b) shows the throughput of the IEEE 802.15.4, PB-TDMA, and S-MAC protocols. IEEE 802.15.4, when configured in a beacon-enabled mode, outperforms the PB-TDMA and S-MAC protocols. The efficiency of a MAC protocol depends on the traffic pattern; in this case, the S-MAC protocol shows poor performance because the traffic scenario that we generated is not an ideal scenario for S-MAC. Figure 3(c) shows the residual energy at various nodes over the simulation time. When nodes finish their transmission, they go into sleep mode, as indicated by the horizontal lines. The coordinator has a considerable energy loss because it always listens to the other nodes; however, the energy consumption of the coordinator is not a critical issue in a WBAN. We further analyze the residual energy at the ECG node for different transmission powers. As given in Figure 3(d), there is only a minor change in energy loss for the three transmission powers. This shows that reducing the transmission power alone does not save energy unless it is supported by an efficient power management scheme.

IEEE 802.15.4 can be considered for certain on-body medical applications, but it does not achieve the power level required for in-body nodes. It is also not sufficient for high data rate medical and non-medical applications because of its 250 kbps limit. Furthermore, it uses slotted or unslotted CSMA/CA, where the nodes are required to sense the channel before transmission. However, channel sensing is not guaranteed in the MICS band, because the path loss inside the human body, where RF energy is absorbed by tissue as heat, is much higher than in free space. Bin et al. showed that the clear channel assessment (CCA) range of in-body nodes is only 0.5 meters [20]. This unreliability in CCA indicates that CSMA/CA is not an ideal technique for the in-body communication system. An alternative approach is to use a TDMA-based protocol that contains a beacon, a configurable contention access period (CCAP), and a contention-free period (CFP) [21]. Unlike IEEE 802.15.4, this protocol uses a slotted ALOHA protocol in the CCAP instead of CSMA/CA. The CCAP should contain a few slots (3 or 4) of equal duration and can be used for short data transmission and guaranteed time slot (GTS) allocation.
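Since the CCAP relies on slotted ALOHA, its capacity is easy to bound. The classic throughput expression S = G * e^(-G), where S is the number of successful transmissions per slot and G the offered load per slot, peaks at about 36.8%; the short sketch below illustrates this. It is the textbook model, not a result from [21]:

import math

def slotted_aloha_throughput(g):
    """Expected successful transmissions per slot for offered load g (frames/slot)."""
    return g * math.exp(-g)

for g in (0.25, 0.5, 1.0, 2.0):
    print(f"offered load G = {g:.2f} -> throughput S = {slotted_aloha_throughput(g):.3f}")

# The maximum, S = 1/e ~ 0.368, occurs at G = 1, so the few CCAP slots should only
# carry short command and GTS-request traffic, as suggested above.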
To enable a logical connection between the in-body and on-body communication systems, a method called the bridging function can be used, as discussed in [21]. The bridging function can integrate in-body and on-body nodes into a single WBAN, thus satisfying the MAC transparency requirement. Further details about the bridging function are given in [22].

3.4. Discussion

Since CSMA/CA is not suitable, owing to unreliable CCA and heavy collision problems, the most reliable power-efficient choice is a TDMA-based protocol. Many protocols have been proposed for a WBAN, and most of them are based on a TDMA mechanism. However, all of them have pros and cons for a real WBAN system that should operate on multiple PHYs (MICS, ISM, and UWB) simultaneously. MAC transparency has been a hot topic for MAC designers, since different bands have different characteristics in terms of data rate, number of channels in a particular frequency band, and data prioritization. A good MAC protocol should enable reliable operation on the MICS, ISM, and UWB bands simultaneously. The main problems are related to the MICS band because of FCC restrictions [23]. According to the FCC, "Within 5 seconds prior to initiating a communications session, circuitry associated with a medical implant programmer/control transmitter must monitor the channel or channels the MICS system devices intend to occupy for a minimum of 10 milliseconds per channel." In other words, the coordinator must satisfy a listen-before-talk (LBT) criterion prior to a MICS communication session. The implants are not allowed to
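To illustrate the FCC requirement quoted above, the following sketch shows how a coordinator might implement the listen-before-talk check across the ten MICS channels, monitoring each for at least 10 ms and picking the quietest one. The measurement function is a placeholder, not a real radio API:

import time

MICS_CHANNELS = range(10)      # ten 300 kHz channels in the 402-405 MHz MICS band
MONITOR_SECONDS = 0.010        # FCC: at least 10 ms of monitoring per channel

def measure_ambient_dbm(channel):
    """Placeholder for a radio driver call returning ambient power on a channel (dBm)."""
    return -95.0 + channel      # dummy values for illustration only

def listen_before_talk():
    """Monitor every candidate channel for at least 10 ms and return the quietest one."""
    readings = {}
    for ch in MICS_CHANNELS:
        start = time.monotonic()
        level = measure_ambient_dbm(ch)
        # Keep sampling until the 10 ms monitoring window for this channel has elapsed.
        while time.monotonic() - start < MONITOR_SECONDS:
            level = max(level, measure_ambient_dbm(ch))
        readings[ch] = level
    return min(readings, key=readings.get)   # channel with the lowest ambient power

print("Selected MICS channel:", listen_before_talk())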