Friday, September 6, 2019
The Coffee Crisis Essay Example for Free
The Coffee Crisis Essay

Introduction. In 2011, Diego Comin, Associate Professor of Business Administration at Harvard Business School, revised his 2009 case study on the Great Moderation (reproduced by permission for Capella University, 2011). The case explores whether the Great Moderation, defined by investopedia.com as "the period of decreased macroeconomic volatility experienced in the United States since the 1980s [during which] the standard deviation of quarterly real GDP declined by half, and the standard deviation of inflation declined by two-thirds" (para. 1), is still in effect. This paper will use evidence from a research draft by Pancrazi and Vukotic (2011), which proposes that "macroeconomic variables in the last thirty years have not only experienced a reduction in their overall volatility, but also an increase in their persistence" (p. 2). The 2011 research paper also argues that "by using a New-Keynesian macroeconomic model the responsiveness of output variance to changes in the monetary policy decreases with an increase in the persistence of technology" (p. 2). The result, according to Pancrazi and Vukotic, is an "overestimate" of the influence and authority of monetary policy to "smooth out the real economic dynamics" (p. 2).

The Great Moderation and the Great Recession. Comin, in "The Great Moderation, Dead or Alive?" (Capella, 2011), quotes Ben Bernanke, Chairman of the Federal Reserve: "reduced macroeconomic volatility has numerous benefits. Lower volatility of inflation improves market functioning, makes economic planning easier, and reduces the resources devoted to hedging inflation risks. Lower volatility of output tends to imply more stable employment and a reduction in the extent of economic uncertainty confronting households and firms. The reduction in the volatility of output is also closely associated with the fact that recessions have become less frequent and less severe" (p. 17). Comin points out that these conditions held until the Great Recession of 2007, when the U.S. and other countries experienced the longest period of recession and "the largest GDP contraction in the U.S. since the Great Depression" (p. 17). In "Overlooking the Great Moderation, Consequences for the Monetary Policy" (2011), the researchers hypothesize that the "Great Moderation might have been fertile ground for the recent recession" (p. 3), in that technology caused an "increased persistence in the macroeconomic variables" (p. 4).

Macroeconomic Observations. To summarize Comin's (2011) account of macroeconomic activity in the U.S. between 1930 and 2010: observing GDP over this period, he says, "it is clear that since around 1984 it has been harder to observe large deviations from the average growth rate" (p. 17). Examining other macroeconomic variables, Comin says that hours worked, consumption, investment, labor productivity, and total factor productivity (TFP) have, for the most part, "experienced stabilization by roughly the same magnitude, [where] the stock market has not stabilized significantly. If anything, it has become more volatile over the last few decades" (p. 18).
Pancrazi and Vukotic focus their research on "studying the behavior of the total factor productivity (TFP) before and after the Great Moderation" (p. 4), "[by] using a basic New-Keynesian model featuring imperfect competition and price stickiness, [to ascertain] whether a change in the persistence of TFP affects the responsiveness of the real variables to the monetary policy" (p. 6). Their observations include an examination of the stability of TFP and an assessment that "a higher

Microeconomic impact of the coffee crisis. The case study conveys that "coffee was the main source of income for roughly 25 million farmers, mostly small land holders, in Latin America, Africa, and Asia" (p. 1). The coffee crisis created immense hardship for these small producers; "in some countries, farmers had been forced to take their children out of school and put them to work" (p. 1). One less publicized consequence of the coffee crisis was the devastation of larger farms and their workers. Large farms generally do not use non-cash family workers the way many smaller farms do; as a result of the crisis, many workers were laid off, and many larger farms were subsequently put out of business entirely (Prince, 2003). Where some producers chose to leave the coffee business and venture into unknown territory with a new crop, others either attempted to break into the coffee "niche" market or reduced their output (Lines & Tickell, 2003). In the ICO report on the impact of the coffee crisis on poverty, the socio-economic impact reported by the respondent countries is filled with accounts of families and farmers in the coffee industry who were unable to pay for medicine, food, and other essentials. Families are also reported to have migrated to cities, where there is typically no work for skilled farmers; some countries report that workers migrated and left their families behind (Osorio, 2003).

Solutions for long-term sustainability. The case study presents an outline of solutions recommended by the ICO, TechnoServe (as reported to the Inter-American Development Bank) and Oxfam. "The Coffee Crisis" states that, according to Oxfam, "the long run solution...was a commitment to 'fair trade'...a system in which a buyer in the first world agrees to pay third-world producers enough to support a decent living" (p. 5). Oxfam says that "the fair trade movement was designed to provide an assured income and other benefits to the farmers associated with it" (Lines & Tickell, 2003, p. 8). TechnoServe believes the following "three areas offer the highest potential for sustainable impact: 1. Increasing coffee consumption in producer countries and emerging market countries; 2. Assisting unprofitable producers of high-quality Arabica to move into higher-priced specialty coffees; and 3. Helping regions with a high concentration of marginal coffee producers who cannot differentiate their product or compete on price to diversify into other products and industries" (para. 15-16). In June 2004, Nestor Osorio of the ICO presented to the United Nations Conference on Trade and Development (UNCTAD) a report titled "Lessons Learned from the Coffee Crisis: A Serious Problem for Sustainable Development." In it he outlines the economic strategies he believed would prevent a future crisis and help coffee producers toward long-term sustainability. Two proposed policies address the supply-demand problem:
1. To use the experience of the coffee crisis to create awareness (best achieved through the ICO) in national and international bodies of the danger of embarking on any projects or programmes (sic) which will further increase supply; and 2. Working to increase the benefits accruing from value-added products rather than traditional bulk commodity exports. Osorio also recognizes the importance of "the need for market development to increase demand" (p. 5). He says that projects intended to benefit the supply chain should include actions from farmer to consumer, as well as farmer to exporter. These include: 1. "Support for the ICO's Quality-Improvement Programme as a means of improving consumer appreciation and consumption of coffee; 2. Action to increase consumption in coffee-producing countries themselves, which should have a number of positive effects such as providing an alternative market outlet, increasing producer awareness of consumer preferences, stimulation of small and medium enterprises, etc., as well as acting to increase demand; 3. Action to enhance knowledge and appreciation of coffee in large emerging markets such as Russia and China, following the successful ICO campaigns in the 1990s; and 4. Protecting consumption levels in traditional markets through quality maintenance, development of niche markets and dissemination of positive information on the health benefits of coffee consumption" (pp. 5-6).

Conclusion. The coffee market has been described as "an imperfect market; a market that in recent years has failed - both in human and economic terms" (Lines & Tickell, 2003, p. 8). The coffee crisis illuminated the market's impact on international trade, national economies, businesses and families, many of them in underdeveloped, low-income countries. Because the regions where coffee can be grown are often also third-world or repressed countries, coffee production is considered a humanitarian concern as well as an economic issue. Where an organization like TechnoServe may lean toward business-partnership solutions for the coffee industry, and Oxfam may concentrate on the humanitarian perspective, the International Coffee Organization appears to have taken a balanced approach, presenting the plight of coffee producers from both altruistic and economic perspectives. Because many depressed areas and nations depend on coffee crops for sustenance, the ICO has taken the stand that the lessons learned from the coffee crisis must be addressed through the tenets of economics, coupled with social responsibility, if families, farms, businesses and coffee-producing nations are to achieve long-term sustainability.

References
Capella University (Eds.). (2011). MBA6008: Global Economic Environment. New York, NY: McGraw-Hill.
Lines, T., & Tickell, S. (2003, May 1). Walk the talk (Oxfam International Briefing Paper). Oxfam International. Retrieved May 5, 2012, from www.oxfam.org/sites/www.oxfam.org/files/walk.pdf
Osorio, N. (2002). ICO.org documents: Global crisis. International Coffee Organization. Retrieved May 4, 2012, from dev.ico.org/documents/globalcrisise.pdf
Osorio, N. (2003). ICO.org documents: G-8. International Coffee Organization. Retrieved May 4, 2012, from dev.ico.org/documents/g8e.pdf
Osorio, N. (2004). ICO.org documents: UNCTAD. International Coffee Organization. Retrieved May 4, 2012, from dev.ico.org/documents/UNCTAD.pdf
Prince, M. (2003, December 3). Coffee crisis: TechnoServe releases fact-based industry analysis. CoffeeGeek. Retrieved May 5, 2012, from http://coffeegeek.com/resources/pressreleases/technoservedec42003.
Thursday, September 5, 2019
SWOT and PESTEL Analysis of Nokia
SWOT and PESTEL Analysis of Nokia

Introduction: Nokia is one of the biggest companies in the world and has led the mobile market since 1992. The Nokia story began with Fredrik Idestam's paper mill on the banks of the Nokianvirta River. Between 1865 and 1967 the company grew into a major industrial power, and its merger with a cable company and a rubber firm created the new Nokia Corporation and set it on the path to electronics. The newly established Nokia Corporation was ideally placed for a leading role in the early development of mobile communications. As European telecommunications markets became easier to access and mobile networks became universal, Nokia led the way with some legendary products. In 1992 Nokia decided to concentrate on its telecommunications business, possibly the most important strategic decision in its history. As acceptance of the GSM standard grew, new CEO Jorma Ollila made Nokia the worldwide leader of the mobile telephone industry. Nokia has kept up its involvement with 3G, mobile multiplayer gaming and multimedia devices and continues to look ahead to the future; it sold its billionth mobile phone as the third generation of mobile technology emerged. Although Nokia's main business is producing and selling mobile phones, it also owns the Navteq division, which produces satellite navigation products, and it sells networking technology to other countries through Nokia Siemens Networks. However, Nokia is not the only player in the market; its main competitors include Apple, BlackBerry, Samsung, Motorola and HTC, with Apple the principal rival. This report is based on the UK mobile market, and it compares Nokia with its closest competitor, Apple.

SWOT and PESTEL analysis. SWOT stands for strengths, weaknesses, opportunities and threats (see appendices 2). Strengths and weaknesses are treated as internal factors, while opportunities and threats are external factors. The work of Kenneth Andrews (1971, p. 47)[i] has been especially influential in popularizing the idea that good strategy means ensuring a fit between the external situation a firm faces (threats and opportunities) and its own internal qualities or characteristics (strengths and weaknesses); manufacturing strategy can be seen as reflecting this idea of fit in functional terms. Both sets of factors can come from any environmental element covered by PESTEL. PESTEL describes the macro environment, or external factors: political, economic, societal, technological, ecological and legal. In this section, SWOT and PESTEL analyses are used to show Nokia's competitive position against Apple. First of all, Nokia has a strong brand name across the UK, which gives it a dominant position over other brands. Secondly, it has a wide product range with which to target many market segments. Apple, however, has a strong image in consumers' minds and is gradually displacing Nokia, although it has the weakness of offering essentially one model to serve every type of customer. Nokia phones are more user friendly than other phones, whereas Apple phones are more technically oriented and can feel complicated. In terms of distribution, Nokia has a more extensive distribution channel than any other brand, while Apple's is comparatively limited. After-sales service is superior at Apple, and Apple's products are generally more robust than Nokia's.
Nokia has a strong financial background, which helps it to innovate and refresh its models frequently. There is also much discussion of the greenhouse effect, and Nokia has responded to this situation positively by continuously developing environmentally friendly products. Its devices introduce eco-innovations and solutions for a greener lifestyle, a practice it intends to extend more widely in the future: energy-efficient technology in devices and production, 100% recyclable packaging, and a device-recycling option. A notable change is the elimination of the printed user manual, which is instead provided on the device. According to Nokia spokesperson Petteri Alinikula, Nokia has reduced its power consumption by 65%, roughly one third of total use, by using recyclable materials and new, more efficient production methods, and has become an eco-friendly company. Nokia's prices are higher than those of other brands, which is one of its weaknesses, and its service involves long waiting times, which means longer downtime. In contrast, Apple's service times are very low, which gives it a point of differentiation over rival brands. Nokia also has some highly technical models that suffer from slower performance and are not user friendly. On the opportunity side, Nokia still has a very wide market and sells all over the world, which gives it more scope than its competitors to target additional segments; because it holds the larger market share, it can more easily reduce prices and gain competitive advantage. Apple, by contrast, currently has a small worldwide market but has the opportunity to build a large global market, gaining greater economies of scale and profitability. The telecommunications market is growing rapidly worldwide, which opens the door to future profitability; through extensive communication and aggressive advertising, Nokia can still claim the leading position in the market. For Apple this is a threat, as it does not engage in aggressive marketing. The emerging threat for Nokia is the fast-growing mobile companies around the world, which also threaten competitors such as Apple, Samsung and BlackBerry. There is a sharp trend of technological development that poses a major challenge to both companies: Nokia and Apple perform well on 3G networks, but the world is moving quickly to 4G technologies such as WiMAX, and other companies such as Motorola, Samsung and Dell have already started making 4G-compatible phones. This could affect current market share. At the moment Nokia does not face any political issues in the UK; in addition, because its parent company is European, it should have some advantages in terms of business and tax. It might, however, face political complications because it outsources its production: a government may force Nokia to stop offshoring, which would increase costs and reduce profitability, and it may also face obligations from the countries it outsources to. Nokia needs to react to changing situations and have a contingency plan. In addition, mobile phones make crime easier; countries such as the UK face problems like mobile scams and fraud. To address these problems Nokia is making its devices more secure, for example by providing GPS. Nokia has also committed to fair-trade production and has introduced a scheme to protect wildlife called SOS (Save Our Species).
Nokia also supports education through mobile devices, providing education in less developed countries in coordination with UNESCO under a project called Education for All. The economy is changing rapidly because of high technology and globalization, which influences people's buying power; all companies need to adjust their prices and promotion and to re-segment their markets to cope with the change. The UK market in particular has a higher currency fluctuation rate than other European markets, which creates a significant pricing issue; moreover, the recent recession in the UK reduced the number of buyers, which hampered total sales across the mobile industry. However, as the world economy moves towards a virtual economy of e-commerce, e-business, telemarketing, distance learning, telemedicine and e-government, a wide path opens up for the whole mobile industry, increasing the scope for new markets and revenue. Legal issues are hard for mobile companies to avoid, and phone companies are still penalized over them; for instance, Nokia has accused Apple of violating copyright law, which generates bad publicity and can require a large amount of compensation.

Marketing strategy. Marketing strategy deals with questions such as what should happen, how it should happen, when, and who will be responsible for it.[ii] Marketing strategy mainly combines the 4 Ps: product, price, place and promotion. It starts with the product, which is the core element of any offering. Business organizations have increasingly recognized the value of placing a broad definition on their products, one that emphasizes the basic customer need(s) being served (Kotler, 1969, p. 13).[iii] The product, then, is what people expect from it; for a mobile phone, the basic need is communication. According to Kotler's (1969) diagram (see appendices 3), there are three levels of product. The core of the product is the benefit, which here is communication. The actual product is the brand name; for Nokia the features are SMS, MMS and WAP, the style is a distinctive model with attractive packaging, and the quality is less downtime than substitute products. The augmented product is the warranty, customer service and distribution channel (Orange, Vodafone and O2 in the UK). This illustration shows that a mobile phone is not only a physical product; it is a product-service bundle.

Figure 1: Service. We can see this more clearly in figure 1, as the product is a combination of product and service: the physical product is visible, while the service, which includes customer service, warranty and call quality, is not. To ensure customer satisfaction, Nokia therefore needs to confirm not only product quality and durability but also its before- and after-sales service, which will add further value for its customers. The product then informs the pricing strategy. Pricing strategy depends on two sets of factors, external and internal. In Nokia's case the external factors are competitors' costs, prices and offers, while the internal factors relate mainly to marketing objectives: Nokia believes in mass customization. To satisfy the needs of every segment it offers products from premium-priced smartphones and multimedia phones down to low-priced basic phones. Its prices also vary by model and colour; it charges more for the best-selling colour than for others. For example, Nokia charged a higher price for the black Nokia N95 than for other colours, which indicates that it does consider market demand and price elasticity.
It is also clear that Nokia is concerned with market penetration, which supports its corporate strategy. Apple, by contrast, is a premium brand that does not attempt to become the cost leader; it prefers skimming and premium pricing strategies, tending to start with a high price and reduce it over time. Apple uses price elasticity to match price to demand and also cuts the price of slow-moving products. Moreover, Apple products signal the prestige of their users, so people buy not only the product but also the prestige; Apple is therefore clearly the differentiator. As a differentiator, Apple needs a unique selling proposition that will persuade customers to switch brands; this reinforces the customer's image of the brand and gives the feeling of getting more for the money. By offering a lower price for a short period, Apple stimulates short-term volume when market demand is low; a lower price also helps Apple compete with rivals and erode customers' loyalty to other brands. Nokia has a different pricing strategy and a different product range. Nokia's products are divided into three sections: devices, Navteq and networking. In the devices section Nokia mainly makes mobile phones, but it also produces notebook PCs, tablets and mobile accessories. Navteq is the section that sells satellite navigation, and Nokia Siemens Networks, a joint venture with Siemens, builds networks for mobile operators. Apple, its main competitor, offers more products beyond the mobile phone: portable computers, servers, accessories, Wi-Fi base stations, MP3 players, online TV, and peripherals such as video cameras and printers. After price comes place. Nokia sells all over the world; its main regions are Europe, Asia Pacific, the Middle East, Africa, America, Hong Kong, China and Taiwan, with its strongest base in the European and Asia Pacific markets. In America it is dominated by Apple, which is an American company headquartered in California. Apple has retail stores in three major countries, with around 200 stores across the US, UK and Canada, and it has recently moved into China, gaining two benefits: low-cost outsourcing and a huge local market. After place, the next consideration is the people involved in the business, which in practice means customer service across Europe and the US. Nokia has service centres in every city to stay close to its customers and offers an international warranty on all its phones regardless of where they were bought; even though repair times are long, it provides free device repair and a certain level of free parts where necessary. Apple employs highly trained professionals to provide better service and puts the right people in the right places, from the front desk to highly technical staff. Apple is well known for its after-sales service, which is accessible to all customers near and far through phone, video conferencing and web support; in areas where repairs take a long time, Apple provides a direct replacement, which adds further value to its service. After service, the next element is physical evidence. Nokia has customized websites for each region, providing device information, prices and technical solutions, and it also offers local call-centre support.
Nokia has showrooms in every city. Apple's website, www.apple.com, lets people navigate to more customized sites based on their location, and the Apple logo, the half-eaten rainbow apple, recalls Sir Isaac Newton's famous apple and makes the brand easy to remember. Apple has showrooms in every big city, which confirms its presence. In addition, Apple uses relationship events such as Macworld Expo and advertising through different channels to communicate with different customer types, and it converts new customers into loyal customers by providing high-quality customer service. Apple is also expanding its distribution channels, adding Wal-Mart in the US and the mobile operator Orange, Tesco and Vodafone as new channels in the UK. Nokia, on the other hand, has distinctive qualities compared with Apple's products: Nokia phones are compatible with any network, have a user-friendly operating system, are famous for ease of use, and its models are quite durable in the market. To examine this we can analyse the product life cycle of Nokia.

Product life cycle. The product life cycle is a graph that shows a product from its introductory stage to its decline stage and suggests measures to manage each situation (see appendices 4). To analyse the product life cycle we use the Nokia N95 and the US as the market, because Nokia struggled to introduce its third-generation phone there. The N95 was first introduced in March 2007; while it was at the introductory stage it achieved very low sales, so Nokia began selling it at a discounted price in America in September 2007. Nokia then improved the hardware and began selling version N95-2 in Europe and Asia, and when it upgraded to version 3 it began selling it in the US through AT&T. The N95 then enjoyed steady sales in the US market until the new competitor, Apple, arrived with the new-generation iPhone. In the decline stage Nokia increased its promotion and then decided to offer free internet with its smartphones, including the N95, to compete with the iPhone, but sales still fell. After that Nokia decided to replace the model completely and introduced the N96, which helped it to keep its market share. To find out how strong its market base is, we can look at a market share analysis.

Market share analysis. Market share is the percentage of units sold by a company out of the total units sold by all companies in the industry. To analyse market share we look at worldwide smartphone market share in 2009. Recent research (GSM Dome, February 2010) shows that Apple's iPhone doubled its global smartphone market share to 14.4%, an increase of 6.2 points from 2008, while Nokia's Symbian phones kept first place in this segment. Nokia's market share fell by 5.5 points, while RIM and its BlackBerries came second with a 19.9% share, a 3.3-point increase. In the States, BlackBerry and Apple have a more dominant market share than Symbian, and in terms of mobile use for communication, Apple and Android hold the segment with an 81% market share. In addition, Android phone sales grew by 3.4 points to 3.9%, although Android still trails Windows Mobile and Linux; Windows Mobile's market share dropped 3.1 points, while Linux saw a cut of 2.9%, and Palm's WebOS has a market share of 0.7%. The same research shows that 172 million smartphones were sold in 2009, 24% more than in 2008, and that the iPhone accounted for 2% of all mobile phone sales last year.[iv] (See appendices 5.)
In addition, according to Timo Ihamuotila, chief financial officer of Nokia Oyj (2010), Nokia's market share in 2010 will decline by around 3% (see table 1), while at the same time its profitability has increased. Nokia is now more focused on product price, as its ASP (average selling price) is going up. It is also developing a better and unique operating system experience across all its devices, which will make it a differentiator; in effect, Nokia is moving from mass customization towards differentiation.[v] Having looked at worldwide market share, let us see how Nokia is positioned in the UK market.

Positioning. Positioning depends on the customer's view of the product and its price. In this comparison we choose smartphones from different companies and compare them by price and quality (measured by a ranking of 1-5) to show how each brand delivers value for its customers. To draw the positioning map we collected data from three websites: Nokia, Apple and Carphone Warehouse (see appendices 6).

Positioning map. Comparing these, we can see that Apple has the highest perceived quality in the market and Motorola the lowest; Nokia sits just behind Apple and ahead of BlackBerry. This shows a negative gap for Nokia on quality, but on price Nokia offers the best value compared with its competitors. There is therefore a clear opportunity for Nokia to provide high quality at a smart price, which would give it another competitive edge.

Boston Consulting Group matrix. BCG stands for the Boston Consulting Group, a general management consulting firm famous for business strategy consulting. From the late 1970s the BCG Growth-Share Matrix became widely accepted and began to be taught in educational institutions (see appendices 7). The choice of the product life cycle and market share as the two contingent variables has an added attraction: when combined, they form the framework for the BCG's product portfolio matrix (Henderson, 1979, p. 513).[vi] This management tool has four segments: Stars, Cash Cows, Question Marks and Dogs. According to the BCG growth-share matrix, Nokia is in the cash cow position. It is profitable and is the market leader in a low-growth market; as a cash cow, Nokia should fund its own growth, and it pays the corporate dividends and the corporate overhead. Nokia held exactly this position until the new player Apple entered the market. This new competitor could cause Nokia to lose profitability and slow its market acquisition, which would eventually push Nokia out of its position; as a consequence, Nokia's share price is falling. Apple, however, is in the star position: it is the market leader in high-growth markets. Stars tend to generate higher revenue but have to use a certain amount of cash because of the growth of the market; for example, Apple is reducing its iPhone price in the UK to penetrate the market quickly.

Ansoff matrix. The Ansoff matrix presents the product and market choices available to an organisation, indicating whether to penetrate an existing market, develop new products, move into new markets or diversify (see appendices 8). According to the Ansoff matrix (Ansoff, 1957), market penetration occurs when an organization tries to penetrate an existing market with its current product; companies use this strategy to increase sales without changing the product-market strategy. Considering the Nokia N95 again: when it was first introduced in the US in April 2007, it was sold in the flagship store at a discounted price.
Then, in August 2007, Nokia N95 version 2 came out and was sold at the normal price in America. This clearly shows that Nokia used the Ansoff matrix to penetrate the US market. Another strategy is product development, which means new innovation or new product design; it is normally used to utilize spare capacity, make the environment harder for competitors, build a reputation for R&D and acquire new technology. For example, the N8 is a new Nokia innovation that lets the company hold its market, keep its brand image and remain competitive against its rivals.

Financial analysis. Gross margin, gross profit margin or gross profit rate is the amount of sales revenue left after production costs, excluding overhead, payroll, taxation and interest payments. It expresses the relationship between gross profit and sales revenue and measures how well each dollar of a company's revenue is used to cover the cost of goods sold.[vii] In the financial analysis we use the Europe figures, as there is no separate financial data for the UK. In appendices 9 we can see that the gross profit margin fell by 1.474% in 2009 compared with 2007, which should affect profitability. Profit margin, net margin, net profit margin or net profit ratio all refer to a measure of profitability, calculated as net profit as a percentage of revenue.[viii] Profit margin is an indicator of a company's pricing strategies and how well it controls costs; differences in competitive strategy and product mix cause the profit margin to vary among companies.[ix] If we look at the net profit margin in appendices 10, it has fallen further than the gross profit margin: the decline amounts to 12.578%, a huge loss in Nokia's profitability, which could be due to currency fluctuation and the recession. The current ratio is a financial ratio that measures a company's capacity to pay its liabilities over the next year; it compares a firm's current assets with its current liabilities and indicates the company's ability to meet creditors' demands. If a company's current ratio is within the acceptable range, it is generally considered to have positive short-term financial strength. Nokia's current ratio is 1.55 (see appendices 11), which is not especially impressive, but it is not bad given Nokia's long-term plans, which should raise the ratio; for example, Nokia Siemens Networks requires a high investment and takes a long time to become profitable, which could explain why assets are lower relative to liabilities. The acid-test ratio is a stricter measure than the working capital (current) ratio, mainly because the current ratio allows the inclusion of inventory assets. Companies with a ratio of less than 1 cannot cover their current liabilities and should be viewed with extreme concern; furthermore, if the acid-test ratio is much lower than the current ratio, it means current assets are highly dependent on stock, as in retail businesses. Nokia's acid-test ratio is less than 1 (see appendices 12), which means it does not have enough liquid assets to meet its current short-term liabilities. Return on invested capital (ROIC) is a financial measure of how efficiently a company generates cash flow relative to the capital it has invested in its business.
When the return on capital is greater than the cost of capital (usually measured as the weighted average cost of capital), the company is creating value; when it is less than the cost of capital, value is destroyed. According to appendices 13, in 2007 Nokia achieved a return on invested capital of 42.87%, which is very good for any company. In 2008 it fell to 25.828%, still good but not as favourable as before. Finally, in 2009 it fell to 5.825%, which creates a very difficult situation: it reduces shareholders' income, which in turn drives the share price down.

Recommendation. As Nokia faces very tight competition, it needs to be very careful in choosing any strategy. Nokia is responding to its environment, which is commendable, but it still needs to respond more quickly to technological change, as technology changes faster than anything else. Nokia has also increased its ASP (average selling price), which will increase per-unit profitability but may not be very effective: if total sales volume declines, the extra revenue gained through a higher ASP will not make up the total profit. We believe Nokia may need to rethink its marketing mix. Consumers watch carefully what they are getting, so Nokia must pay close attention to the performance of its products; in particular, its operating system needs to be faster, and its after-sales service needs to speed up. All of this Nokia can do to reinforce its market. However, if we look at the financial analysis, we can see a huge change in profitability: there is a sharp decline of more than 12% in Nokia's profit margin over the last three years, indicating a steep reduction in sales. This will affect the share price and discourage stakeholders from investing in Nokia. Analysing the current ratio and the acid-test ratio, we see that Nokia does not have enough liquid assets to pay its liabilities, as the acid-test ratio is less than one. There is still hope, however, because Nokia is investing in Nokia Siemens Networks, which needs high initial investment and takes time to become profitable. Finally, comparing Nokia's opportunities with its current position, we suggest that Nokia diversify away from the mobile business, as it is at high risk of holding a smaller market share than before with higher-priced devices, which will reduce the pool of potential buyers. At the same time, looking at the opportunities for expanding Navteq and Nokia Siemens Networks, the reason for preferring these two sections is their high potential: in the current market there are only a few competitors in the satellite navigation industry, and there is a very promising market base in the UK and Europe. Another plus for Nokia is that it knows these markets well and already has an existing distribution channel, so there is a strong possibility of winning new markets; moreover, it can continue to build networks for different operators, and the joint venture with Siemens gives both companies greater financial and technological strength. We therefore suggest continuing to invest in these two sections and eliminating the mobile section to stop it draining capital.
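As a quick illustration of the ratio formulas used in the financial analysis above, the sketch below recomputes them from made-up figures. The numbers are purely illustrative and are not Nokia's reported results, and the ROIC line is a simplified assumption (net profit over invested capital) rather than the exact calculation behind appendices 13.

```python
# Illustrative figures only (millions of euros); not Nokia's reported results.
revenue, cogs, net_profit = 41_000.0, 28_000.0, 890.0
current_assets, inventory, current_liabilities = 23_600.0, 1_900.0, 15_200.0
invested_capital = 16_000.0

gross_margin  = (revenue - cogs) / revenue * 100        # % of sales left after production cost
net_margin    = net_profit / revenue * 100              # net profit as a % of revenue
current_ratio = current_assets / current_liabilities    # ability to cover short-term liabilities
acid_test     = (current_assets - inventory) / current_liabilities  # excludes stock
roic          = net_profit / invested_capital * 100     # simplified return on invested capital

print(f"gross margin  {gross_margin:.1f}%")
print(f"net margin    {net_margin:.1f}%")
print(f"current ratio {current_ratio:.2f}")
print(f"acid test     {acid_test:.2f}")
print(f"ROIC          {roic:.1f}%")
```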
Wednesday, September 4, 2019
Literature review about data warehouse
Literature review about data warehouse

CHAPTER 2 LITERATURE REVIEW

2.1 INTRODUCTION Chapter 2 provides a literature review of data warehouse, OLAP, MDDB and data mining concepts. We review the concept, characteristics, design and implementation approach of each of these technologies to identify a suitable data warehouse framework. This framework will support the integration of OLAP-based MDDB and a data mining model. Section 2.2 discusses the fundamentals of the data warehouse, including data warehouse models and data processing techniques such as the extract, transform and loading (ETL) process. A comparative study was done on the data warehouse models introduced by William Inmon (Inmon, 1999), Ralph Kimball (Kimball, 1996) and Matthias Nicola (Nicola, 2000) to identify a suitable model, design and set of characteristics. Section 2.3 introduces the OLAP model and architecture; we also discuss the concept of processing in OLAP-based MDDB, and MDDB schema design and implementation. Section 2.4 introduces data mining techniques, methods and processes for OLAP mining (OLAM), which is used to mine MDDB. Section 2.5 provides the conclusion of the literature review, in particular the pointers behind our decision to propose a new data warehouse model. Since we propose to use Microsoft® products to implement the proposed model, we also include a product comparison to justify why Microsoft® products were selected.

2.2 DATA WAREHOUSE According to William Inmon, a data warehouse is "a subject-oriented, integrated, time-variant, and non-volatile collection of data in support of the management's decision-making process" (Inmon, 1999). A data warehouse is a database containing data that usually represents the business history of an organization. This historical data is used for analysis that supports business decisions at many levels, from strategic planning to performance evaluation of a discrete organizational unit. It provides an effective integration of operational databases into an environment that enables strategic use of data (Zhou, Hull, King and Franchitti, 1995). These technologies include relational and MDDB management systems, client/server architecture, meta-data modelling and repositories, graphical user interfaces and much more (Hammer, Garcia-Molina, Labio, Widom, and Zhuge, 1995; Harinarayan, Rajaraman, and Ullman, 1996). The emergence of cross-discipline domains such as knowledge management in finance, health and e-commerce has shown that vast amounts of data need to be analysed. The evolution of data in the data warehouse can provide multiple dataset dimensions to solve various problems, so the critical decision-making processes built on these datasets need a suitable data warehouse model (Barquin and Edelstein, 1996). The main proponents of the data warehouse are William Inmon (Inmon, 1999) and Ralph Kimball (Kimball, 1996), but they have different perspectives on the data warehouse in terms of design and architecture. Inmon (Inmon, 1999) defined the data warehouse with a dependent data mart structure, while Kimball (Kimball, 1996) defined it with a bus-based data mart structure. Table 2.1 discusses the differences in data warehouse structure between William Inmon and Ralph Kimball. A data warehouse is a read-only data source: end users are not allowed to change the values or data elements. Inmon's (Inmon, 1999) data warehouse architecture strategy is different from Kimball's (Kimball, 1996): Inmon's model treats data marts as copies, distributed as an interface between the data warehouse and end users.
Kimball views the data warehouse as a union of data marts: the data warehouse is the collection of data marts combined into one central repository. Figure 2.1 illustrates the differences between Inmon's and Kimball's data warehouse architectures, adopted from (Mailvaganam, 2007). Although Inmon and Kimball have different design views of the data warehouse, they do agree that a successful implementation depends on an effective collection of operational data and validation of the data marts. The roles of database staging and of ETL processes on the data are inevitable components in both researchers' data warehouse designs. Both believe that a dependent data warehouse architecture is necessary to fulfil enterprise end users' requirements in terms of preciseness, timing and data relevancy.

2.2.1 DATA WAREHOUSE ARCHITECTURE Although data warehouse architecture has a wide research scope and can be viewed from many perspectives, (Thilini and Hugh, 2005) and (Eckerson, 2003) provide meaningful ways to view and analyse it. Eckerson states that a successful data warehouse system depends on a database staging process which derives data from different integrated Online Transaction Processing (OLTP) systems; in this case, the ETL process plays a crucial role in making the database staging process workable. A survey of the factors that influence the selection of data warehouse architecture by (Thilini, 2005) identifies five data warehouse architectures in common use, as shown in Table 2.2. Independent Data Marts: Independent data marts are also known as localized or small-scale data warehouses. They are mainly used by departments or divisions of a company to provide individual operational databases. This type of data mart is simple, yet consists of different forms derived from multiple design structures and various inconsistent database designs, which complicates cross-data-mart analysis. Since every organizational unit tends to build its own database, which operates as an independent data mart, (Thilini and Hugh, 2005), citing the work of (Winsberg, 1996) and (Hoss, 2002), note that it is best used as an ad hoc data warehouse and as a prototype before building a real data warehouse. Data Mart Bus Architecture: (Kimball, 1996) pioneered the design and architecture of the data warehouse as a union of data marts, known as the bus architecture or virtual data warehouse. Bus architecture allows data marts to be located not only on one server but also on different servers, which lets the data warehouse function in a more virtual mode, combining all data marts and processing them as one data warehouse. Hub-and-Spoke Architecture: (Inmon, 1999) developed the hub-and-spoke architecture. The hub is the central server taking care of information exchange, and the spokes handle data transformation for all regional operational data stores. Hub-and-spoke focuses mainly on building a scalable and maintainable infrastructure for the data warehouse. Centralized Data Warehouse Architecture: The centralized data warehouse architecture is built on the hub-and-spoke architecture but without the dependent data mart component. This architecture copies and stores heterogeneous operational and external data in a single, consistent data warehouse. It has only one data model, which is consistent and complete across all data sources.
According to (Inmon, 1999) and (Kimball, 1996), a centralized data warehouse should include database staging, also known as an operational data store, as an intermediate stage for the operational processing of data integration before it is transformed into the data warehouse. Federated Architecture: According to (Hackney, 2000), a federated data warehouse is an integration of multiple heterogeneous data marts, database staging or operational data stores, and a combination of analytical applications and reporting systems. The federated concept focuses on an integrated framework to make the data warehouse more reliable. (Jindal, 2004) concludes that the federated data warehouse is a practical approach, as it focuses on higher reliability and provides excellent value. (Thilini and Hugh, 2005) conclude that the hub-and-spoke and centralized data warehouse architectures are similar. Hub-and-spoke is faster and easier to implement where no data marts are required, while the centralized data warehouse architecture scored higher than hub-and-spoke where there is an urgent need for a relatively fast implementation approach. In this work it is very important to identify which data warehouse architecture is robust and scalable in terms of building and deploying enterprise-wide systems. (Laney, 2000) states that the selection of an appropriate data warehouse architecture must incorporate the successful characteristics of the various data warehouse models. It is evident that two data warehouse architectures have proved popular, as shown by (Thilini and Hugh, 2005), (Eckerson, 2003) and (Mailvaganam, 2007): first, the hub-and-spoke architecture proposed by (Inmon, 1999), a data warehouse with dependent data marts, and second, the data mart bus architecture with dimensional data marts proposed by (Kimball, 1996). The new proposed model will use the hub-and-spoke data warehouse architecture, which can be used for MDDB modelling.

2.2.2 DATA WAREHOUSE EXTRACT, TRANSFORM, LOADING The data warehouse architecture process begins with the ETL process, which ensures the data passes the quality threshold. According to Evin (2001), it is essential to have the right dataset. ETL is an important component of the data warehouse environment, ensuring that datasets in the data warehouse are cleansed from the various OLTP systems; ETL jobs are also responsible for running the scheduled tasks that extract data from OLTP systems. Typically, a data warehouse is populated with historical information from within a particular organization (Bunger, Colby, Cole, McKenna, Mulagund, and Wilhite, 2001). The complete process descriptions of ETL are discussed in Table 2.3. A data warehouse database can be populated with a wide variety of data sources from different locations, so collecting all the different datasets and storing them in one central location is an extremely challenging task (Calvanese, Giacomo, Lenzerini, Nardi, and Rosati, 2001). However, ETL processes eliminate the complexity of data population via the simplified process depicted in Figure 2.2. The ETL process begins with data extraction from operational databases, where data cleansing and scrubbing are done to ensure all data are validated. The data are then transformed to meet the data warehouse standards before being loaded into the data warehouse. (Zhou et al., 1995) state that during the data integration process in the data warehouse, ETL can assist in the import and export of operational data between heterogeneous data sources using an Object Linking and Embedding Database (OLE-DB) based architecture, where the data are transformed to populate the data warehouse with all validated data.
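The extract, cleanse/transform and load stages described above can be illustrated with a minimal, hypothetical sketch. Python and SQLite are used here only for illustration, and the table names, column names and cleansing rules are invented; they do not come from the cited literature or from any particular ETL product.

```python
import sqlite3

# A minimal, hypothetical ETL sketch: extract rows from an operational (OLTP)
# source, cleanse/transform them in a staging step, then load them into the
# warehouse. All table and column names are illustrative only.

def extract(oltp):
    # Extract: pull raw order rows from the operational system.
    return oltp.execute(
        "SELECT order_id, customer, amount, order_date FROM orders").fetchall()

def transform(rows):
    # Transform: scrub invalid measures and standardise text before loading.
    cleaned = []
    for order_id, customer, amount, order_date in rows:
        if amount is None or amount < 0:
            continue                                  # reject bad records
        cleaned.append((order_id, customer.strip().upper(),
                        round(amount, 2), order_date))
    return cleaned

def load(dw, rows):
    # Load: append validated rows into the warehouse fact table.
    dw.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)", rows)
    dw.commit()

if __name__ == "__main__":
    oltp = sqlite3.connect(":memory:")   # stand-in for the operational database
    oltp.execute("CREATE TABLE orders (order_id, customer, amount, order_date)")
    oltp.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                     [(1, " alice ", 120.501, "2011-01-05"),
                      (2, "bob", -5.0, "2011-01-06")])  # second row fails validation

    dw = sqlite3.connect(":memory:")     # stand-in for the data warehouse
    dw.execute("CREATE TABLE fact_sales (order_id, customer, amount, order_date)")
    load(dw, transform(extract(oltp)))
    print(dw.execute("SELECT * FROM fact_sales").fetchall())
```

In a real deployment these three steps would run as a scheduled job against the staging database, as described above, rather than against in-memory tables.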
In (Kimball, 1996), the data warehouse architecture, as depicted in Figure 2.3, focuses on three important modules: the back room, the presentation server and the front room. The ETL processes are implemented in the back room, where the data staging services are in charge of gathering all the source systems' operational databases, extracting data from the source systems in different file formats from different systems and platforms. The second step is to run the transformation process to remove all inconsistencies and ensure data integrity. Finally, the data are loaded into data marts. The ETL processes are commonly executed as a job controlled via task scheduling. The presentation server is the data warehouse where the data marts are stored and processed. Data are stored in a star schema consisting of dimension and fact tables. The data are then processed in the front room, where they are accessed by query services such as reporting tools, desktop tools, OLAP and data mining tools. Although ETL processes prove to be an essential component for ensuring data integrity in the data warehouse, the issues of complexity and scalability play an important role in deciding the type of data warehouse architecture. One way to achieve a scalable, non-complex solution is to adopt a hub-and-spoke architecture for the ETL process. According to Evin (2001), ETL operates best in a hub-and-spoke architecture because of its flexibility and efficiency, and a centralized data warehouse design makes it easier to maintain full access control of the ETL processes. ETL processing in a hub-and-spoke data warehouse architecture is recommended in (Inmon, 1999) and (Kimball, 1996): the hub is the data warehouse, after data have been processed from the operational database into the staging database, and the spokes are the data marts that distribute the data. Sherman (2005) states that the hub-and-spoke approach uses one-to-many interfaces from the data warehouse to many data marts. One-to-many interfaces are simpler to implement, cost-effective in the long run and ensure consistent dimensions, whereas a many-to-many approach is more complicated and costly.

2.2.3 DATA WAREHOUSE FAILURE AND SUCCESS FACTORS Building a data warehouse is a challenging task, as a data warehouse project has unique characteristics that may influence its overall reliability and robustness. These factors can be applied during the analysis, design and implementation phases to ensure a successful data warehouse system. Section 2.2.3.1 focuses on the factors that lead to data warehouse project failure, and Section 2.2.3.2 discusses the success factors, that is, implementing the correct model to support a successful data warehouse project.

2.2.3.1 DATA WAREHOUSE FAILURE FACTORS The studies of (Hayen, Rutashobya, and Vetter, 2007) show that implementing a data warehouse project is costly and risky, as a data warehouse project can cost over $1 million in the first year, and it is estimated that two-thirds of attempts to set up data warehouse projects will eventually fail. (Hayen et al., 2007), citing the work of (Briggs, 2002) and (Vassiliadis, 2004), note three groups of factors in the failure of data warehouse projects: environment, project and technical factors, as shown in Table 2.4. Environmental factors lead to organizational changes in terms of business, politics, mergers, takeovers and lack of top management support. These include human error, corporate culture, the decision-making process and poor change management (Watson, 2004; Hayen et al., 2007).
Poor technical knowledge of the requirements for data definitions and data quality across different organizational units may also cause data warehouse failure. Insufficient knowledge of data integration, poor selection of the data warehouse model and of the data warehouse analysis applications can lead to major failure. In spite of heavy investment in hardware, software and people, poor project management may also cause a data warehouse project to fail; for example, assigning a project manager who lacks knowledge and project experience in data warehousing may make it difficult to quantify the return on investment (ROI) and to achieve the project's triple constraint (cost, scope, time). Data ownership and accessibility is another potential factor in data warehouse project failure. This is a sensitive issue within the organization: one must not share or acquire someone else's data, as this is considered losing authority over the data (Vassiliadis, 2004), so no department should be allowed to declare total ownership of pure, clean and error-free data, which could otherwise cause problems over ownership of data rights.

2.2.3.2 DATA WAREHOUSE SUCCESS FACTORS (Hwang, 2007) stresses that data warehouse implementation is an important area of research and industrial practice, but only a few studies have assessed the critical success factors for data warehouse implementations. He surveyed six data warehouse studies (Watson and Haley, 1997; Chen et al., 2000; Wixom and Watson, 2001; Watson et al., 2001; Hwang and Cappel, 2002; Shin, 2003) on the success factors in a data warehouse project, and concluded with a list of success factors that influence data warehouse implementation, as depicted in Figure 2.8. He shows eight implementation factors that directly affect the six selected success variables. These data warehouse success factors provide an important guideline for implementing successful data warehouse projects. (Hwang, 2007) shows that an integrated selection of factors such as end-user participation, top management support, and the acquisition of quality source data combined with profound, well-defined business needs plays a crucial role in data warehouse implementation. Besides these, other factors highlighted by Hayen (2007), citing the work of Briggs (2002), Vassiliadis (2004) and Watson (2004), such as project, environment and technical knowledge, also influence data warehouse implementation.

Summary: In the new proposed model of this work, the hub-and-spoke architecture is used as the central repository service, as many scholars, including Inmon, Kimball, Evin, Sherman and Nicola, adopt this data warehouse architecture. This approach allows the hub (data warehouse) and spokes (data marts) to be located centrally and distributed across a local or wide area network depending on business requirements. In designing the new proposed model, the hub-and-spoke architecture clearly identifies six important components a data warehouse should have: ETL, a staging database or operational data store, data marts, MDDB, OLAP, and data mining end-user applications such as data query, reporting, analysis and statistical tools. This process may, however, differ from organization to organization. Depending on the ETL setup, some data warehouses may overwrite old data with new data, while others maintain a history and audit trail of all changes to the data.
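The closing distinction above (overwriting old data versus keeping a history with an audit trail) corresponds to two common ways of maintaining dimension records. The sketch below is a hypothetical illustration and is not drawn from the cited literature; the customer dimension, its field names and the dates are all invented. The first function simply overwrites the attribute, while the second closes the old row and inserts a new one so the history remains queryable.

```python
from datetime import date

# Hypothetical customer dimension row; field names are illustrative only.
dim_customer = [
    {"customer_key": 1, "customer_id": "C001", "city": "Leeds",
     "valid_from": date(2010, 1, 1), "valid_to": None, "current": True},
]

def overwrite_city(rows, customer_id, new_city):
    # "Overwrite old data with new data": no history is kept.
    for row in rows:
        if row["customer_id"] == customer_id:
            row["city"] = new_city

def track_city_history(rows, customer_id, new_city, change_date):
    # "Maintain history and an audit trail": close the old row, add a new one.
    for row in rows:
        if row["customer_id"] == customer_id and row["current"]:
            row["valid_to"], row["current"] = change_date, False
            rows.append({"customer_key": len(rows) + 1,
                         "customer_id": customer_id, "city": new_city,
                         "valid_from": change_date, "valid_to": None,
                         "current": True})
            break

track_city_history(dim_customer, "C001", "London", date(2011, 6, 1))
print(dim_customer)   # both the Leeds and London rows remain queryable
```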
2.3 ONLINE ANALYTICAL PROCESSING The OLAP Council (1997) defines OLAP as a group of decision support systems that facilitate fast, consistent and interactive access to information that has been reformulated, transformed and summarized from relational datasets, mainly from the data warehouse, into MDDB, allowing optimal data retrieval and trend analysis. According to Chaudhuri (1997), Burdick et al. (2006) and Vassiliadis (1999), OLAP is an important concept for strategic database analysis. OLAP has the ability to analyze large amounts of data and extract valuable information, and this analytical capability can serve the business, education or medical sectors; the technologies of the data warehouse, OLAP and analysis tools support that ability. OLAP enables the discovery of patterns and relationships contained in business activity by querying large volumes of data from multiple database source systems at one time (Nigel, 2008). Processing database information using OLAP requires an OLAP server to organize and transform the data and build the MDDB; the MDDB is then separated into cubes for client OLAP tools to perform data analysis, which aims to discover new patterns and relationships between the cubes. Popular OLAP server software includes Oracle (C), IBM (C) and Microsoft (C). Madeira (2003) supports the view that OLAP and the data warehouse are complementary technologies that blend together: the data warehouse stores and manages data, while OLAP transforms data warehouse datasets into strategic information. OLAP functionality ranges from basic navigation and browsing (often known as slice and dice) to calculations and more serious analysis such as time series and complex modelling. As decision-makers adopt more advanced OLAP capabilities, they move from basic data access to the creation of information and the discovery of new knowledge.

2.3.4 OLAP ARCHITECTURE In comparison to the data warehouse, which is usually based on relational technology, OLAP uses a multidimensional view of aggregated data to provide rapid access to strategic information for analysis. There are three types of OLAP architecture, based on the method by which they store multidimensional data and perform analysis operations on that data (Nigel, 2008): multidimensional OLAP (MOLAP), relational OLAP (ROLAP) and hybrid OLAP (HOLAP). In MOLAP, as depicted in Diagram 2.11, datasets are stored and summarized in a multidimensional cube. The MOLAP architecture can perform faster than ROLAP and HOLAP (C). MOLAP cubes are designed and built for rapid data retrieval to support efficient slicing and dicing operations, and MOLAP can perform complex calculations which have been pre-generated after cube creation. MOLAP processing is, however, restricted to the initial cube that was created and is not bound to any additional replication of the cube. In ROLAP, as depicted in Diagram 2.12, data and aggregations are stored in relational database tables to provide the OLAP slicing and dicing functionality. ROLAP is the slowest of the OLAP flavours: it relies on manipulating the data directly in the relational database to give the appearance of conventional OLAP slicing and dicing functionality. Basically, each slicing and dicing action is equivalent to adding a WHERE clause to the SQL statement (C). ROLAP can manage large amounts of data and has no limitation on data size, and it can leverage the intrinsic functionality of the relational database.
ROLAP is slow in performance because each ROLAP activity is essentially a SQL query, or multiple SQL queries, against the relational database. The query time and the number of SQL statements executed depend on the complexity of those statements, and this can become a bottleneck if the underlying dataset is large. Because ROLAP depends essentially on generating SQL statements to query the relational database, and SQL does not cater for all analytical needs, ROLAP technology is conventionally limited by what SQL functionality can offer. HOLAP, as depicted in Diagram 2.13, combines the technologies of MOLAP and ROLAP: detail data are stored in relational database tables, while the aggregations are stored in a MOLAP cube. HOLAP can drill down from the multidimensional cube into the underlying relational data. To obtain summary information, HOLAP leverages cube technology for faster performance, whereas to retrieve detail information it drills down from the cube into the underlying relational database.

In all of the OLAP architectures (MOLAP, ROLAP, and HOLAP) the datasets are presented in a multidimensional format, which involves the creation of multidimensional blocks called data cubes (Harinarayan, 1996). A cube in an OLAP architecture may have three axes (dimensions) or more. Each axis represents a logical category of data: one axis may, for example, represent the geographic location of the data, while others may indicate a period of time or a specific school. Each of these categories, described in the following section, can be broken down into successive levels, and it is possible to drill up or down between the levels. Cabibo (1997) states that OLAP partitions are normally stored on an OLAP server, with the relational database frequently stored on a separate server; the OLAP server must then query across the network whenever it needs to access the relational tables to resolve a query, and the impact of doing so depends on the performance characteristics of the network. Even when the relational database is placed on the same server as the OLAP server, inter-process calls and the associated context switching are required to retrieve relational data. With an OLAP partition, by contrast, calls to the relational database, whether local or over the network, do not occur during querying.

2.3.3 OLAP FUNCTIONALITY

OLAP functionality offers dynamic multidimensional analysis that supports end users in analytical activities, including calculations and modelling applied across dimensions, trend analysis over time periods, slicing subsets for on-screen viewing, and drilling down to deeper levels of records (OLAP Council, 1997). OLAP is implemented in a multi-user client/server environment and provides reliably fast responses to queries regardless of database size and complexity. It helps the end user integrate enterprise information through comparative, customized viewing and through analysis of historical and current data in various what-if scenarios. This is achieved through the use of an OLAP server, as depicted in Diagram 2.9. The OLAP server's design and data structures are optimized for fast information retrieval in any direction and for flexible calculation and transformation of raw data. The OLAP server may physically store the processed multidimensional information so as to deliver consistent and fast response times to end users, or it may populate its data structures in real time from relational databases, or it may offer a choice of both.
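The cube and drilling ideas described above can be illustrated with a small standard-library Python sketch; the dimensions (location, year, school) echo the example in the text, and the figures are invented:

# Toy cube: each cell is keyed by (location, year, school) and holds a measure.
# Drilling up just means aggregating away the finer level of a dimension hierarchy.
from collections import defaultdict

cells = {
    ("Asia",   2006, "School A"): 120,
    ("Asia",   2007, "School A"): 150,
    ("Europe", 2007, "School B"):  90,
}

def drill_up(cells, keep):
    """Aggregate the cube, keeping only the axes whose positions are in `keep`."""
    out = defaultdict(int)
    for key, value in cells.items():
        out[tuple(key[i] for i in keep)] += value
    return dict(out)

# Drill up from (location, year, school) to (location, year): the school level is summed away.
print(drill_up(cells, keep=(0, 1)))
# Drill up further to location only.
print(drill_up(cells, keep=(0,)))   # {('Asia',): 270, ('Europe',): 90}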
Essentially, OLAP creates information in cube form, which allows more complex analysis than a relational database. OLAP analysis techniques employ slice and dice and drilling methods to segregate the data into meaningful subsets according to given parameters. A slice fixes a single value for one dimension, producing a sub-array of the multidimensional array, while the dice operation applies the slice function across two or more dimensions of the cube. The drilling function allows the end user to traverse from condensed data down to the most precise data unit, as depicted in Diagram 2.10.

2.3.5 MULTIDIMENSIONAL DATABASE SCHEMA

The base of every data warehouse system is a relational database built using a dimensional model. A dimensional model consists of fact and dimension tables, arranged as either a star schema or a snowflake schema (Kimball, 1999). A schema is a collection of database objects: tables, views, and indexes (Inmon, 1996). To aid understanding of dimensional data modelling, Table 2.10 defines some of the terms commonly used in this type of modelling. In designing data models for a data warehouse, the most commonly used schema types are the star schema and the snowflake schema. In the star schema design, the fact table sits in the middle and is connected to the surrounding dimension tables like a star. A star schema can be simple or complex: a simple star consists of one fact table, while a complex star can have more than one fact table. Most data warehouses use a star schema to represent the multidimensional data model. The database consists of a single fact table and a single table for each dimension. Each tuple in the fact table consists of a pointer, or foreign key, to each of the dimensions that provide its multidimensional coordinates, and it stores the numeric measures for those coordinates; a tuple is a unit of data extracted from a cube over a range of members from one or more dimensions (http://msdn.microsoft.com/en-us/library/aa216769%28SQL.80%29.aspx). Each dimension table consists of columns that correspond to attributes of the dimension. Diagram 2.14 shows an example of a star schema for a Medical Informatics System. Star schemas do not explicitly provide support for attribute hierarchies, which makes them less suitable for architectures such as MOLAP that require extensive hierarchies of dimension tables for efficient drilling of datasets. Snowflake schemas provide a refinement of star schemas in which the dimensional hierarchy is explicitly represented by normalizing the dimension tables, as shown in Diagram 2.15. The main advantage of the snowflake schema is the improvement in query performance, due to minimized disk storage requirements and the joining of smaller lookup tables; its main disadvantage is the additional maintenance effort needed because of the increased number of lookup tables. Levene, M. (2003) stresses that, in addition to the fact and dimension tables, data warehouses store selected summary tables containing pre-aggregated data. In the simplest cases, the pre-aggregated data corresponds to aggregating the fact table on one or more selected dimensions, and such pre-aggregated summary data can be represented in the database in at least two ways. Whether to use a star or a snowflake schema mainly depends on business needs.
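To ground the star schema description, the following sketch builds a one-fact-table star in SQLite and runs a typical dimensional query; the medical-informatics table names are invented for illustration and are not taken from Diagram 2.14:

# A minimal star schema: one fact table whose rows hold foreign keys into the
# dimension tables plus the numeric measure. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_patient (patient_id INTEGER PRIMARY KEY, age_group TEXT);
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_visit  (patient_id INTEGER, date_id INTEGER, cost REAL);

INSERT INTO dim_patient VALUES (1, '0-18'), (2, '19-65');
INSERT INTO dim_date    VALUES (10, 2007, 1), (11, 2007, 2);
INSERT INTO fact_visit  VALUES (1, 10, 50.0), (2, 10, 75.0), (2, 11, 60.0);
""")

# A typical dimensional query: join the fact table to its dimensions and aggregate.
rows = conn.execute("""
SELECT d.year, p.age_group, SUM(f.cost)
FROM fact_visit f
JOIN dim_patient p ON p.patient_id = f.patient_id
JOIN dim_date    d ON d.date_id    = f.date_id
GROUP BY d.year, p.age_group
""").fetchall()
print(rows)

A snowflake variant would simply normalize dim_patient or dim_date into further lookup tables, trading extra joins for smaller dimension tables.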
2.3.2 OLAP Evaluation

As OLAP technology takes a prominent place in the data warehouse industry, a suitable assessment tool is needed to evaluate it. E. F. Codd, who coined the term OLAP, provided a set of criteria known as the Twelve Rules for assessing an OLAP product's capability, which include data manipulation, unlimited dimensions and aggregation levels, and flexible reporting, as shown in Table 2.8 (Codd, 1993). Codd's twelve rules of OLAP give us an essential tool for verifying that the OLAP functions and OLAP models used are able to produce the desired results. Berson, A. (2001) stressed that a good OLAP system should also supply complete database management tools: an integrated, centralized utility that permits the administration and distribution of databases within the enterprise. OLAP's ability to perform drilling within the MDDB should extend right down to the source, that is, to the detail record level; this implies that an OLAP tool should permit a smooth changeover from the MDDB to the detail record level of the source relational database. OLAP systems must also support incremental database refreshes, an important feature for preventing stability and usability problems as the size of the database increases.

2.3.1 OLTP and OLAP

The design of OLAP multidimensional cubes is entirely different from the design of OLTP databases. OLTP is implemented in relational databases to support daily processing in an organization; its main function is to capture data into computers. OLTP allows effective data manipulation and storage for daily operations, resulting in huge quantities of transactional data, and organizations build multiple OLTP systems to handle the large volumes of daily transactions generated in short periods of time. OLAP, by contrast, is designed for data access and analysis to support the strategic decision-making of managerial users. OLAP technology focuses on aggregating datasets into a multidimensional view without hindering system performance. Han, J. (2001) describes OLTP systems as customer oriented and OLAP as market oriented, and summarizes the major differences between OLTP and OLAP systems based on 17 key criteria, as shown in Table 2.7. It is complicated to merge OLAP and OLTP into one centralized database system. The dimensional data model used in OLAP is much more effective for querying than the relational model used in OLTP, and OLAP may use one central database as its data source where OLTP uses different data sources at different database sites. The dimensional design of OLAP is not suitable for an OLTP system, mainly because of redundancy and the loss of referential integrity of the data. Organizations therefore choose to run two separate information systems, one OLTP and one OLAP (Poe, V., 1997). We can conclude that the purpose of OLTP systems is to get data into computers, whereas the purpose of OLAP is to get data, or information, out of computers.

2.4 DATA MINING

Many data mining scholars (Fayyad, 1998; Freitas, 2002; Han, J. et al., 1996; Frawley, 1992) define data mining as the discovery of hidden patterns in historical datasets by means of pattern recognition, since it involves searching for specific, previously unknown information in a database. Chung, H. (1999) and Fayyad et al. (1996) refer to data mining as a step of knowledge discovery in databases: the process of analyzing data in, and extracting knowledge from, a large database, also known as a data warehouse (Han, J., 2000), and turning it into useful information. Freitas (2002) and Fayyad (1996) have recognized data mining as an advantageous tool for extracting knowledge from a data warehouse.
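As a toy illustration of the pattern-discovery idea in the data mining paragraph above, this standard-library Python sketch counts item pairs that co-occur frequently in historical transactions; the data are invented:

# A very small taste of pattern discovery: find item pairs that frequently
# co-occur in historical transactions.
from collections import Counter
from itertools import combinations

transactions = [
    {"aspirin", "bandage", "thermometer"},
    {"aspirin", "thermometer"},
    {"bandage", "gauze"},
    {"aspirin", "thermometer", "gauze"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Report pairs that appear in at least half of the transactions ("hidden patterns").
min_support = len(transactions) / 2
for pair, count in pair_counts.items():
    if count >= min_support:
        print(pair, count)   # e.g. ('aspirin', 'thermometer') 3

Real data mining tools scale the same idea to millions of records and to richer pattern types such as association rules, clusters, and classification models.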
Tuesday, September 3, 2019
How Nuclear Power Works
How Nuclear Power Works

Nuclear power plants provide about 17 percent of the world's electricity. Some countries depend more on nuclear power for electricity than others. In France, for instance, about 75 percent of the electricity is generated from nuclear power, according to the International Atomic Energy Agency. In the United States, nuclear power supplies about 15 percent of the electricity overall, but some states get more power from nuclear plants than others. There are more than 400 nuclear power plants around the world, with more than 100 in the United States.

(Photo caption: The dome-shaped containment building at the Shearon Harris Nuclear Power Plant near Raleigh, NC.)

Have you ever wondered how a nuclear power plant works or how safe nuclear power is? In this article, we will examine how a nuclear reactor and a power plant work. We'll explain nuclear fission and give you a view inside a nuclear reactor.

Uranium
Uranium is a fairly common element on Earth, incorporated into the planet during the planet's formation. Uranium is originally formed in stars: old stars exploded, and the dust from these shattered stars aggregated together to form our planet. Uranium-238 (U-238) has an extremely long half-life (4.5 billion years), and therefore is still present in fairly large quantities. U-238 makes up 99 percent of the uranium on the planet. U-235 makes up about 0.7 percent of the remaining uranium found naturally, while U-234 is even more rare and is formed by the decay of U-238. (Uranium-238 goes through many stages of alpha and beta decay to form a stable isotope of lead, and U-234 is one link in that chain.) Uranium-235 has an interesting property that makes it useful for both nuclear power production and for nuclear bomb production. U-235 decays naturally, just as U-238 does, by alpha radiation. U-235 also undergoes spontaneous fission a small percentage of the time. However, U-235 is one of the few materials that can undergo induced fission: if a free neutron runs into a U-235 nucleus, the nucleus will absorb the neutron without hesitation, become unstable and split immediately. See How Nuclear Radiation Works for complete details.

Nuclear Fission
Picture a uranium-235 nucleus with a neutron approaching from the top. As soon as the nucleus captures the neutron, it splits into two lighter atoms and throws off two or three new neutrons (the number of ejected neutrons depends on how the U-235 atom happens to split).
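As a rough worked example of the half-life figure quoted above (a sketch added here, not part of the original article), the fraction of a U-238 sample surviving after a time t is 0.5 raised to the power of t divided by the half-life:

# Fraction of U-238 remaining after time t, given its ~4.5-billion-year half-life.
half_life_years = 4.5e9

def fraction_remaining(t_years, half_life=half_life_years):
    return 0.5 ** (t_years / half_life)

# Over the roughly 4.5-billion-year age of the Earth, about half the original U-238
# survives, which is consistent with it still being present in large quantities.
print(fraction_remaining(4.5e9))   # 0.5
print(fraction_remaining(9.0e9))   # 0.25 after two half-lives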
Monday, September 2, 2019
Rebutting Arguments to Legalize Euthanasia or Assisted Suicide Essay
Rebutting Arguments to Legalize Euthanasia or Assisted Suicide This essay focuses on several of the most common arguments in favor of the legalization of euthanasia or assisted suicide - and rebuts them. The language is simple, or, as they say, in layman's terms so as to be easily understandable. The sources are from professional journals, internet websites, and news outlets. The first common argument favoring euthanasia or assisted suicide is this: "Since euthanasia and assisted suicide take place anyway, isn't it better to legalize them so they'll be practiced under careful guidelines and so that doctors will have to report these activities?" That sounds good but it doesn't work. Physicians who do not follow the "guidelines" will not report and, even when a physician does report information, there is no way to know if it is accurate or complete. For example, the Oregon law requires the Oregon Health Division (OHD) to collect information and publish an annual statistical report about assisted suicide deaths (Oregon). However, the law contains no penalties for health care providers who fail to report information to the OHD. Moreover, the OHD has no regulatory authority or resources to ensure submission of information to its office (Prager). Thus, all information contained in the OHD's official reports is that which has been provided by the physicians who prescribed the lethal drugs and only that which the physicians choose to provide. The OHD even admitted that reporting physicians may have fabricated their versions of the circumstances surrounding the prescriptions written for patients. "For that matter, the entire account could have been a cock-and-bull story. We assume, however, that physicians wer... ...19, conducted by Hebert Research, October 31, 1991, and within one week following the November 5, 1991 vote. Five days before the vote only 9.7 percent of those opposing the measure cited religious reasons for their opposition. Following the measure's defeat, individuals who had previously indicated support for Initiative 119 were again surveyed. Of these previous supporters, 15 percent subsequently opposed the initiative. Religious reasons accounted for only 6.1 percent of this eventual opposition. Transcript from audio tape of "On Target," WVON Radio (Chicago). Debate between Rita Marker and T. Patrick Hill, September 26, 1993. Van der Wal, G., P. J. van der Maas, J. M. Bosma, et al., "Evaluation of the notification procedure for physician-assisted deaths in the Netherlands," 335 New England Journal of Medicine (November 28, 1996), p. 1706.
A Financial Analysis of G.Wilson
Construction is a cyclical business. During economic booms, both individuals and corporations tend to build too much and too quickly. Profit-seeking entities, anxious not to miss out on the economic potential of the boom, push up the demand for both construction materials and labor, which then increases the prices of those variables. In time, and with more and more infrastructure erected, an excess supply develops. When the economy suddenly turns downward, this excess supply, finding no demand, then pushes prices of related industry products downward.

G. Wilson and Its Erratic Earnings

G. Wilson is an example of a company that finds it hard to produce consistent earnings. In one sense it is inevitable for a company that is completely devoted to the production of construction materials to have cyclical earnings. While it has a solid balance sheet, G. Wilson is simply too vulnerable to the boom and bust cycles of the construction industry to realize stable and lasting profits. However, a certain level of innovation can help insulate the company from these systemic shocks, with one example being Mr. Monroe's proposal of direct costing. By changing how the company estimated its costs for the production and sale of rebar, Mr. Monroe was in effect bringing a modicum of both clarity and stability into the earnings picture. With the direct costing method, the price arrived at for the rebar was more precise, in contrast to the old method, which used industry-approved but inaccurately determined fixed costs, including items such as overhead. In this specific instance it was determined that out-of-pocket expenses for a ton of rebar averaged $406, but fixed costs remained more or less constant, so that profits earned or losses realized depended on the amount of tonnage actually sold and shipped. The proposal to "add tonnage in the proposed job to the backlog for the month in which it is to be produced" was meant to produce a method by which a more precise costing could be arrived at, especially in relation to the fixed costs involved. When it came to selling the rebar to the contractors, the more precise costing would allow the company to see immediately which deals were going to produce a profit and which were not, thereby avoiding bad deals in the first place. Without this more precise costing, the company might enter into deals that would make little economic sense, and be saddled with costs that it will in essence pay for in future production.
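As a sketch of the direct-costing arithmetic described above (only the $406 per-ton out-of-pocket cost comes from the case; the selling price, fixed costs, and tonnages below are hypothetical):

# Direct (variable) costing view of a rebar month: each ton contributes
# (price - variable cost) toward fixed costs; profit appears only once total
# contribution covers the period's fixed costs.
variable_cost_per_ton = 406.0
price_per_ton = 450.0           # hypothetical bid price
fixed_costs_for_month = 60_000  # hypothetical

def monthly_profit(tons_sold):
    contribution = (price_per_ton - variable_cost_per_ton) * tons_sold
    return contribution - fixed_costs_for_month

for tons in (1_000, 1_500, 2_000):
    print(tons, "tons ->", monthly_profit(tons))
# Any bid priced above $406 per ton adds contribution margin, so whether the month
# shows a profit or a loss depends on the tonnage actually sold and shipped.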
Sunday, September 1, 2019
Problems of disaster management Essay
Prediction, warning, and evacuation systems that depend on sophisticated technology and highly effective public bureaucracies are especially open to question. In addition, the largest cities have features that were not common in smaller communities and that may raise entirely new problems of disaster management. For instance, disaster impacts on places that command mass-media markets are likely to be extensively, continuously, and obsessively reported, whereas impacts on communities with less access to these channels are likely to be ignored; the consequences for skewing post-disaster assistance are considerable. Secondly, complex societal mixes pose new problems for the delivery of emergency response services and disaster relief; linguistic, ethnic, and other divisions are often marked in such places. Thirdly, the sheer size and complexity of infrastructure networks make them particularly liable to disruption. Finally, recovery is apt to occur more slowly than in smaller places. In short, past lessons of disaster management might no longer be applicable in the cities of the polycentre.

Certainly, the majority of the world's big cities are not part of the polycentre; instead they serve as primary contact points linking the polycentre and regional or local markets on the global periphery. Tijuana (Mexico) is a good example. Once a small regional town, it is now the fourth-largest city in Mexico with a population of well over 1 million. Tijuana's recent growth has been fuelled by the investments of multinational corporations in maquiladora firms near the US border. As more shanty towns cluster in the steep semi-arid valleys of the city edge and more people crowd into the waterside lowlands, the incidence and severity of floods and landslides in Tijuana are also accelerating. In places such as Manila, Dhaka, Ankara, or Lima there is the potential for heavy loss of life during disasters as well as appalling material destruction.

The situation in Lima is typical. This is a city that has endured severe earthquakes at least five times in the past three hundred years. At the end of the Second World War, just over half a million people lived in the metropolitan area; today there are more than five million, as vast numbers of poor rural peasants have flooded into Lima. Not all groups are equally exposed to hazard. Indeed, the pattern of hazard-susceptibility is a complex one that has developed in response to changes in demography, economics, land ownership, building practices, and other factors.

Middle and upper-income groups live in well-constructed houses that often conform to anti-seismic codes and are sited in neighbourhoods with broad streets and ample open spaces; if struck by an earthquake, they have enough resources to ensure quick recovery. The marginal shanty towns (pueblos jovenes) are also low-density settlements, in this case composed of light bamboo structures that do not collapse when the ground moves. Their residents are poor, but levels of social organization are high. By contrast, seismic vulnerability is high in the inner-city slum areas. Here many poor families are crowded into old adobe brick structures, adjacent streets are narrow, and open spaces are non-existent. There are few neighbourhood organizations or other local institutions that might be called on in the event of a disaster, and earthquake protection measures are minimal or, more often, non-existent.
As summarized by one observer, the prospects are bleak:

The population of critical areas would not choose to live there if they had any substitute, nor do they neglect the maintenance of their stuffed and deteriorated tenements. For them it is the best-of-the-worst of a number of disaster-prone situations such as having nowhere to live, having no way of earning a living and having not anything to eat. Given that these other risks have to be faced on a daily basis, it is hardly surprising that people give little precedence to the risk of destruction by earthquake. (Maskrey, 1989, p. 12)

In summary, there is a high degree of uncertainty about the future of cities. Their growth seems certain, but at what density? New ones might spring up in unexpected places under the influence of changing geo-economic forces. Ever more similar in outward form, cities in different cultures and continents may still retain markedly different internal structures. The divisions between rich cities and poor ones might become wider, and their susceptibility to disaster may also diverge; at the same time, the differences between all cities and their rural hinterlands might become sharper. It would be reckless to assume that the disaster-susceptibility of any one city will be quite like that of any other. This is an era of great urban instability, and it bears close examination of hazards and disasters.