Saturday, November 9, 2019

Truck Driving: How to Do It

How many of us can drive a truck? Most people wouldn't know where to start. All they see is a huge chunk of metal with wheels and can't fathom driving it. This guide will give you a crash course (no pun intended) in truck driving and what it's like on the road with these monsters. It may also help you understand the world of the truck driver and give you a newfound respect for them.

One of the first steps every truck driver must take before setting out on a trip is the pre-trip inspection. On some tractor-trailers there are as many as one hundred items that need to be checked or replaced. Some of these items include the wheels and tires, brakes, lights, and fuel containers, all of which can be a major hazard to other drivers if they are not fixed or properly secured.

Another equally important step for a truck driver is making sure the cargo is properly stowed or locked down. Depending on the trailer type there are various ways to properly secure the cargo, such as straps, chains, and wedge blocks. If the cargo isn't properly secured it can shift and cause the truck to be overweight in a certain area. Being overweight on a certain side or on a certain axle can have a devastating effect on the road or on the truck itself. Federal law dictates that no tractor-trailer can exceed 80,000 lbs without proper paperwork.

Another area that needs to be checked thoroughly is the inside of the cab and the dash instruments. Most trucks have three times as many gauges as a passenger vehicle. Some that you would not find in a car are air pressure (psi), exhaust temperature, turbo pressure, and oil temperature. It is very important to keep a check on all the various fluids the truck needs as well. Most trucks will not crank if a fluid level is too low, such as the water.

Once out on the road a truck driver has to be extremely cautious and aware of what is happening around them. One of the most overlooked and easiest things to do is to check the mirrors very often.
The mirrors are there to help you drive safely and efficiently, and checking them is a very important habit to master for our own safety and that of other drivers. There are many more steps and regulations that go along with truck driving. Each state, and even certain cities, has its own rules and regulations, most of which can be viewed on the state DMV website. I hope this essay helps anyone looking to become a truck driver, and remember, keep an eye on those mirrors.

Thursday, November 7, 2019

The State of Texas Academic Readiness

Introduction

The state of Texas has had a statewide program for academic assessment since 1979 (Keating 562). The State of Texas Assessments of Academic Readiness (STAAR) was enacted in 2007, and implementation started in 2011. The main aim of the STAAR is to appraise the knowledge and skills of students. The scope of the STAAR is to test students in grades three through eight in reading, math, science, and social studies (Guisbond, Neil and Schaeffer 35). In addition, it includes assessments carried out at the end of the course and taken between grades 9 and 12. The STAAR has experienced mixed reactions in Texas. Proponents argue that it provides the best alternative for measuring the knowledge and skills of students. On the other hand, there are counterarguments that the program causes mental and emotional strain on the key stakeholders, i.e. the students, parents and teachers (Dutro and Selland 342). This paper analyzes the issues facing the program based on the counterargument.

Measuring Knowledge using STAAR

The STAAR is designed to ensure that a student passes a minimum of 11 end-of-course (EOC) exams in order to graduate. The students must achieve a minimum cumulative score on all the given exams. According to Dutro and Selland, students are supposed to pass all EOC assessments in order to graduate from high school (347). The EOC accounts for 15% of a student's grade. Thus, a failure in an EOC may bar a student from graduating from high school. According to Featherston, the STAAR places a lot of emphasis on testing, which negates teaching to acquire basic skills and knowledge (3). Teachers have resorted to teaching to the tests.
In addition, the students are more anxious about the tests due to the time pressures and formalities that relate to STAAR, which end up placing strain on them to perform well. Thus, the students may fail due to the pressure, or pass with flying colors due to the fear of the consequences. This is because the current standardized testing is based on 'test and punish' policies (Weiss par. 3). Substantiating these claims, Warren and Grodsky stated that STAAR does not measure the knowledge of students (647). For example, a study conducted by Dutro and Selland established that the majority of tests contain information that is hard to understand, and thus the tests do not properly assess skills and knowledge (357). In the study by Dutro and Selland, a student exclaimed, "I was finally happy when I could read chapter books, but I know I'm not good at it. I do badly on those tests. When we take them, I just know it will be another low point, so the books I like, I know they are too low for those tests" (359). This is a pointer that the STAAR has not been able to evaluate the students' knowledge. Warren and Grodsky added that the tests under STAAR are too many and thus cause the students to lose interest in the learning process (648). Cramming of the content being taught has replaced the basic learning process, which builds the students' critical thinking skills. This is contrary to the basic teaching process, in which testing is supposed to link the learning materials and the student in order to promote critical thinking and understanding of the issues being taught. However, the advocates of the STAAR state that standardized testing is the only viable option to assess students' knowledge and skills.
This is because some teachers and parents trust that STAAR determines the academic situation of students in relation to writing and reading. Featherston added that the STAAR develops and administers tests that assess students' knowledge against the set standards for learning (4). As a result, this ensures that all students have the required proficiencies in knowledge and skills and that they are not falsely promoted as they progress to the next grade.

Preparedness of Teachers to Give Students What They Need for STAAR

The state of Texas has developed a new curriculum for subjects such as World Geography and Biology. The standards for the new curriculum have been adopted by the State Board of Education (Guisbond, Neil and Schaeffer 36). However, the state of Texas has not provided the materials required to support the program. The teachers lack textbooks that promote the new curriculum standards. Dutro and Selland stated that the STAAR has shifted the focus of teachers from guiding students to gain the critical thinking skills they require for college and their future careers to 'teaching to the test' (360). Furthermore, the EOC evaluation tests are normally written in complex language, as much as three Lexile levels above grade. This implies that a student may be aware of the subject matter but may not understand the tests because they are written in language that is higher than their normal grade. In support of the view that teachers are not giving the students what they need to prepare for STAAR, a university professor, Walter Stroup, posited that STAAR is about how good the students are at doing standardized tests (Weiss par. 1). Thus, teachers do not teach the right content. However, proponents argue that STAAR has led to teachers providing the students with the content they require in order to prepare them for college and future careers. For instance, it has led to a shift from grade-based to course-based assessments.
In addition, the standardized tests have revolutionized the teaching process because teachers can link performance standards to external evidence of postsecondary readiness. Thus, there is a need to devise other measures to evaluate the educators and students in order to improve academic performance and equity.

Conclusion

Based on the arguments, it can be generalized that STAAR has increased the test workload for teachers, students, and parents. The emphasis on standardized testing has led to teachers aligning their teaching strategies to 'teaching to the test'. This has negated the principle that education is about learning and understanding. However, it is worth noting that STAAR is not entirely terrible. Without proper standardized tests, teachers and parents will not know the academic progress of students and their preparedness to move to the next level of education. Thus, there is a need for test standards that uphold the holistic learning of students and that motivate teachers rather than pressuring them. The tests should be designed with an understanding of the implications of standardized testing and its effects on students' mental and emotional wellbeing. They should also be designed in a manner that empowers teachers, providing them with the learning materials to prepare students to achieve the set standards.

Works Cited

Dutro, Elizabeth, and Makenzie Selland. "'I Like to Read, but I Know I'm Not Good at It': Children's Perspectives on High-Stakes Testing in a High-Poverty School." Curriculum Inquiry 42.3 (2012): 340-367. Print.

Featherston, Mark. High-stakes testing policy in Texas: Describing the attitudes of young college graduates. Texas: Texas State University, 2011. Print.

Guisbond, Lisa, Monty Neill, and Bob Schaeffer.
Resistance to high-stakes testing spreads. District Administration 48.8 (2012): 34-42. Print.

Keating, Daniel. "Formative evaluation of the Early Development Instrument: Progress and prospects." Early Education and Development 18.3 (2007): 561-570. Print.

Warren, John, and Eric Grodsky. "Exit exams harm students who fail them and don't benefit students who pass them." Phi Delta Kappan 90.9 (2009): 645-649. Print.

Weiss, Jeffrey. Texas' standardized tests a poor measure of what students learned, UT-Austin professor says. 11 Aug. 2012. Web.

Tuesday, November 5, 2019

The Scales of Atmospheric Motion

The atmosphere is always in motion. Each of its swirls and circulations is known to us by name: a gust of wind, a thunderstorm, or a hurricane. But those names tell us nothing about size. For that, we have weather scales, which group weather phenomena according to their size (the horizontal distance they span) and how long they last. In order from largest to smallest, these scales are the planetary, synoptic, and mesoscale.

Planetary Scale Weather

Planetary or global scale weather features are the largest and longest-lived. As their name suggests, they generally span tens of thousands of kilometers, extending from one end of the globe to another, and they last weeks or longer. Examples of planetary-scale phenomena include:

Semi-permanent pressure centers (the Aleutian Low, Bermuda High, Polar Vortex)
The westerlies and trade winds

Synoptic or Large Scale Weather

Spanning somewhat smaller, yet still large, distances of a few hundred to several thousand kilometers are synoptic scale weather systems. Synoptic scale weather features have lifetimes of a few days to a week or more, and include:

Air masses
High pressure systems
Low pressure systems
Mid-latitude and extratropical cyclones (cyclones that occur outside of the tropics)
Tropical cyclones, hurricanes, and typhoons

Derived from a Greek word meaning "seen together," synoptic can also mean an overall view. Synoptic meteorology, then, deals with viewing a variety of large scale weather variables over a wide area at a common time. Doing this gives you a comprehensive and nearly instantaneous picture of the state of the atmosphere. If you're thinking this sounds an awful lot like a weather map, you're right! Weather maps are synoptic. Synoptic meteorology uses weather maps to analyze and predict large-scale weather patterns. So each time you watch your local weather forecast, you are seeing synoptic scale meteorology!
Synoptic times displayed on weather maps are known as Z time or UTC.

Mesoscale Meteorology

Weather phenomena that are small in size (too small to be shown on a weather map) are referred to as mesoscale. Mesoscale events range from a few kilometers to several hundred kilometers in size. They last a day or less, impact areas on a regional and local scale, and include events such as:

Thunderstorms
Tornadoes
Weather fronts
Sea and land breezes

Mesoscale meteorology deals with the study of these phenomena and how the topography of a region modifies weather conditions to create mesoscale weather features. The scale can be subdivided further still: even smaller than mesoscale weather events are microscale events, which are smaller than 1 kilometer in size and very short-lived, lasting only minutes. Microscale events, which include things like turbulence and dust devils, don't do much to our daily weather.
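As a rough illustration of the size-based grouping above, the spans could be turned into a simple classifier. The cutoffs below are deliberately simplified assumptions of ours; the real scale boundaries overlap and are not sharply defined.

```python
# Hypothetical helper: classify a weather feature by its horizontal span.
# The numeric cutoffs are illustrative approximations of the ranges in the
# text, not official boundaries.

def classify_scale(span_km: float) -> str:
    """Return the rough meteorological scale for a horizontal span in km."""
    if span_km < 1:
        return "microscale"   # e.g. dust devils, turbulence
    elif span_km < 200:
        return "mesoscale"    # e.g. thunderstorms, sea breezes
    elif span_km < 10_000:
        return "synoptic"     # e.g. air masses, mid-latitude cyclones
    else:
        return "planetary"    # e.g. the westerlies, the polar vortex

print(classify_scale(0.05))   # a dust devil
print(classify_scale(15))     # a thunderstorm
print(classify_scale(1500))   # a low pressure system
print(classify_scale(20000))  # the trade winds
```

Note how a single number (the span) is enough to place a phenomenon on the scale ladder, which is exactly what the scale names summarize.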

Sunday, November 3, 2019

Biomes and Diversity Assignment Example | Topics and Well Written Essays - 250 words

This big shift with the invention of farm implements and tools enabled Man to vastly increase his food supplies, stabilize food sources, and make food production a secure and predictable undertaking, and this incidentally also allowed the arable land to support a much higher population density. Increased food availability made the entire human population grow exponentially. It has also put pressure on the other species of plants and animals, as there is a growing competition for the available food, space, and other requirements for life. Ever since Man burst onto the scene, so to speak, a good number of species have become extinct due mainly to Man's prolific activities. It is a dangerous development, as biodiversity is necessary for ensuring the survival of the remaining species. There are strong ancestor-descendant links between various species and their biomes, so the main concerns should be both conservation (wise use) and preservation (leaving untouched). The past century saw the extinction of about 100 species of birds, mammals, and amphibians (Hassan & Scholes, 2005, p. 105), but this background (natural) extinction rate is expected to be 10,000 times higher in the next two centuries based on ancient fossil records, current trends, and computer modeling of extinction rates (Miller & Spoolman, 2011, p. 191). The loss in genetic diversity becomes a serious threat to Mankind's survival as well, because of the links that were mentioned earlier. There are still many undocumented species, in addition to those already well known, which can provide ecological, economic, and medicinal benefits to Man. People can help to slow down the extinction rate by avoiding environmental degradation, reducing their carbon footprint, minimizing pollution, mitigating climate change, refraining from introducing invasive or harmful species to a biome, preventing over-exploitation of open common

Thursday, October 31, 2019

Money and banking Essay Example | Topics and Well Written Essays - 500 words - 2

Jefferson argues that since no mention of any such mandate was present, Congress had no such right. Hamilton dismissed Jefferson's arguments by citing that Congress has "necessary and proper powers" to implement the nation's fiscal and monetary policy. He added that a central bank fits perfectly into this scheme by making it easier for Congress to do the job: if there were one central bank coordinating all banks, Congress could easily hold one body accountable. Eventually, Hamilton's arguments won, and this would set the practice of establishing central banks for years to come, beginning with the First Bank of the United States of America (Johnson 7). This should actually be viewed as the triumph of the power of money over democracy. Money could be represented by paper marked by the government as legal tender. In itself, it is harmless to democracy. But left in the hands of unscrupulous individuals and bankers, money can be used to damage democracy, as can be seen in the succeeding events. During the term of James Madison, the bill seeking to renew the First Bank's charter was defeated by a narrow margin. Madison liked the outcome, but chaos ensued. The War of 1812 forced the US Government to focus its efforts on surviving the conflict with England. As a result, state-chartered banks began issuing different fiat currencies with little value. Proponents of central banking then blamed Madison for these troubles. Near the end of his term, Madison was forced to sign the charter of the Second Bank of the United States, as this was the popular clamor of Representatives (Johnson 9). Thus, although there were hopes that democracy would prevail over the system of credit, central banking won. This episode clearly illustrates the fact that, because of money, efforts to implement what is good for the general public can be undermined. Fast forward to 1907,

Tuesday, October 29, 2019

Law of contract Case Study Example | Topics and Well Written Essays - 2000 words

In this scenario, there are two questions which arise. First, is the price of 100 listed in the newspaper advertisement binding on Wedding Heaven in the event that they sell the dress? Secondly, does John's delay cause him in law to have accepted the contract offer of the lower amount of 150? In order to analyse this question effectively, it is important to look at relevant Irish case law on this issue in order to determine whether or not such actions constitute a binding contract enforceable in law. There are a number of leading cases in both the Irish jurisdiction and other common law jurisdictions, notably England, which need to be assessed in order to consider this question. This essay shall first analyse the formative components which are necessary for the formation of a contract. Secondly, after assessing the relevant law, these principles will be applied to the scenario above. Finally, and in conclusion, this paper shall decide whether or not a claim exists in contract law in the scenario against either Wedding Heaven or John the DJ.

We now turn to the basic contract law principles which currently exist in Ireland. There are a number of requirements necessary for the formation of a valid contract: offer and acceptance, an intention to create legal relations, and finally consideration. It is the first two elements on which this paper shall concentrate.

Offer

It is important at the outset to distinguish between an offer and a mere invitation to treat. An offer is when the seller sets out in certain terms what they propose to sell to the potential buyer. In essence, it is the final set of terms which, if accepted by the buyer, would create a valid contract. However, an invitation to treat is not a formal offer, but rather an indication of intent to enter negotiations.
It is not possible to accept a mere invitation to treat in order to create a binding contract. Therefore it is important to ascertain the exact intent of any representation as to whether it is a formal offer or simply a declaration of intent. Such declarations may be considered as offers under statute or at common law. In general, advertisements are considered to be an invitation to treat. In the English case of C.A. Norgren Co. v Technomarketing, Walton J refused a committal order against one of the defendants for allegedly breaching an undertaking given to the High Court that the defendants would not "make, offer for sale, sell or distribute" items that were subject to copyright. The defendants distributed a price list and brochure, including an item covered by the undertaking. Walton J upheld the contention of the defendant that, generally, the distribution of advertising material constituted an invitation to treat and was therefore not an offer. The intention of the seller can either be express, by way of direct words, or implied by his actions. It has previously been held in case law that a personal quotation of the price of goods was merely an invitation to treat. Further, it has also been held that a display of goods for sale with price labels attached is in all probability only an invitation to treat, whether the products are in a shop window, on a store shelf or indeed in a self-service store. One of the leading cases is that of Fisher v Bell, where a shopkeeper displayed a knife with a price ticket in his shop window. He was charged with offering a flick knife for sale in contravention of the Restriction of Offensive Weapons Act 1959 s1. It was however held that the shopkeeper was not guilty, because displaying the knife in the shop window amounted merely to an invitation to treat. Accordingly, the shopkeeper had not offered the knife for sale within the 1959 Act.
Further, in the leading English case of Pharmaceutical Society of Great Britain

Sunday, October 27, 2019

The Classic Transportation Problem Computer Science Essay

The Classic Transportation Problem (CTP) is a significant research issue in spatial data analysis and network analysis in GIS; it helps to answer problems which involve matching supply and demand via a set of objectives and constraints. The objective is to determine a set of origins and destinations for the supply so as to minimize the total cost. A Geographic Information System (GIS) is an intelligent tool which combines attribute data and spatial features and deals with the relationships connecting them. Although GIS is extensively utilized in numerous activities, in transportation its application is still rare. Basically, GIS is an information system focusing on a few factors which include the input, management, analysis and reporting of geographic (spatially related) information. Among all the prospective applications that GIS can be used for, issues in transportation have gained a lot of interest, and a distinct division of GIS related to transportation issues has surfaced, labelled GIS-T. The Hitchcock transportation problem is conceivably one of the most solved linear programming problems in existence (Saul I. Gass, 1990). The addition of GIS into transportation (GIS-T) suggests that it is possible to integrate transportation data into GIS. Many research scholars have discussed computational considerations for solving the Classic Transportation Problem (CTP): Shafaat and Goyal developed a procedure for ensuring an improved solution for a problem with a single degenerate basic feasible solution; Ramakrishnan described a variation of Vogel's approximation method (VAM) for finding a first feasible solution to the CTP; and Arsham and Kahn described a new algorithm for solving the CTP. According to Bradley, Brown and Graves (2004), practically every text on management science or operations management includes the CTP.
In classic problems relating to transportation, a particular objective, for instance minimum cost or maximum profit, is the focus when integrating GIS and the available transportation data. For example, (Jaryaraman and Prikul, 2001), (Jaryaraman and Ross, 2003), (Yan et al., 2003), (Syam, 2002), (Syarif et al., 2002), (Amiri, 2004), (Gen and Syarif, 2005), and (Trouong and Azadivar, 2005) considered the total cost of the supply chain as the objective function in their studies. Nevertheless, hardly any real design tasks are single-objective problems. In this chapter, we present an in-depth computational comparison of the basic solution algorithms for solving the CTP. We will describe what we know with respect to solving CTPs in practice and offer comments on various aspects of CTP methodologies and on the reporting of computational results. In order to describe the core elements of the GIS transport model that is used to obtain the solution to the CTP, it is essential to briefly go over the different types of transportation models, and to elaborate on the application and issues of GIS in transportation. The chapter concludes with some final remarks.

The Classic Transportation Problem (CTP)

The Classic Transportation Problem (CTP) refers to a special class of linear programming and has been recognized as a fundamental network problem. Its early history can be traced to the work of Kantorovich, Hitchcock, Koopmans and Dantzig. The CTP can be solved by applying the simplex method directly to its standard linear-programming formulation. Still, because of its very special mathematical structure, it was acknowledged early that the simplex method applied to the CTP can be made quite efficient in how it computes the needed simplex-method information: the variable to enter the basis, the variable to leave the basis, and the optimality conditions.
Many practical transportation and distribution problems, such as fixed-cost transportation and minimum-cost problems with fixed charges in logistics, can be formulated as CTPs.

Mathematical formulation of the CTP

There have been numerous studies focusing on new models or methods to verify the transportation or logistics activities that can offer the least cost (Gen and Chen, 1997). Generally, logistics has been defined in terms of the quality of a flow of materials, such as the frequency of departure (number per unit time), adherence to the transportation time schedule, and so on (Tilaus et al., 1997). Products can be assembled and sent to the allocation centres, vendors or plants. Hitchcock (1941) initiated the earliest formulation of a planar transportation model, used to find an approach to transport homogeneous products from several sources to several destinations so that the total cost is minimized. According to Jung-Bok Jo, Byung-Ki Kim and Ryul Kim (2008), a variety of deterministic and/or stochastic models have been developed throughout the past several decades. The basic problem, sometimes called the general or Hitchcock transportation problem, can be stated mathematically as follows, where m is the number of supply centres and n is the number of demand points:

Minimize  Z = Σ_{i=1..m} Σ_{j=1..n} c_ij x_ij     (2.1)

subject to:

Σ_{j=1..n} x_ij = a_i  for i = 1, ..., m     (2.2)
Σ_{i=1..m} x_ij = b_j  for j = 1, ..., n     (2.3)
x_ij ≥ 0  for all i, j (non-negativity constraints)     (2.4)

Without loss of generality, it is assumed that the problem is balanced, i.e. total demand equals total supply:

Σ_{i=1..m} a_i = Σ_{j=1..n} b_j

All the parameters are free to take non-negative real values. The a_i are called supplies and the b_j are called demands. For our discussion here, we also assume that the costs c_ij ≥ 0. A number of heuristic methods to solve the classic transportation problem have been proposed (Gottieb et al., 1998; Sun et al., 1998; Adlakha and Kowalski, 2003; Ida et al., 2004).
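To make the balanced formulation concrete, here is a minimal sketch (our own illustrative code and data, not taken from any of the works cited) of the north-west corner method, a classic way of producing a first basic feasible solution that satisfies the supply and demand constraints:

```python
# Illustrative sketch of the north-west corner method (NWCM) for a balanced
# Hitchcock transportation problem: allocate greedily from the top-left cell.

def north_west_corner(supply, demand):
    """Return an allocation matrix x with row sums = supply, col sums = demand."""
    supply, demand = list(supply), list(demand)   # work on copies
    m, n = len(supply), len(demand)
    x = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])   # ship as much as possible at (i, j)
        x[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                      # row supply exhausted: move down
        else:
            j += 1                      # column demand exhausted: move right
    return x

# Balanced example: total supply = total demand = 100
x = north_west_corner([30, 40, 30], [20, 50, 30])
for row in x:
    print(row)
```

The result is feasible (constraints 2.2 and 2.3 hold) but ignores the costs c_ij entirely, which is why it usually serves only as a starting point for an optimizing method such as the simplex-based procedures discussed below.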
Chan and Chung (2004) suggested a multi-objective genetic optimization approach for the distribution problem in a demand-driven supply chain network (SCN). They took the minimization of the total cost of the system, total delivery days, and the equity of the capacity utilization ratio for manufacturers as objectives. Meanwhile, Erol and Ferrel (2004) recommended a model that assigned suppliers to warehouses and warehouses to customers. In addition, the SCN design problem has been formulated as a multi-objective stochastic mixed integer linear programming model, which was then resolved by a control method and branch and bound techniques (Guillen et al., 2005). Chan et al. (2004) took SC profit over the time horizon and customer satisfaction level as objectives, and developed a hybrid approach combining a genetic algorithm and the Analytic Hierarchy Process (AHP) for production and distribution problems in multi-factory supply chain models. Jung-Bok Jo, Byung-Ki Kim and Ryul Kim (2008) considered several objectives in their research, namely operation cost, service level, and resource utilization. This project considers the integration of the CTP into the GIS environment, a line of study into which little or no research has been done. Our formulation will concentrate particularly on the use of several GIS software packages and procedures to see how the CTP can be solved in the GIS environment. On that note, and as already stated in chapter one, in trying to integrate the CTP into the GIS environment, two of the algorithms explained in this literature review will be used to solve the CTP and obtain initial basic feasible solutions, and one optimal-solution method will be used to obtain the optimal solution that will be integrated into the GIS software environment.
2.4 Methods of solving Transportation problems

The practical importance of determining the efficiency of alternative ways of solving transportation problems is affirmed not only by the sizeable fraction of the linear programming literature dedicated to these problems, but also by the fact that an even larger share of the concrete industrial and military applications of linear programming deal with transportation problems. Transportation problems often occur as sub-problems within a bigger problem. Moreover, industrial applications of transportation problems often contain thousands of variables, and hence a streamlined algorithm is not merely computationally worthwhile but a practical necessity. In addition, many linear programs that arise can nevertheless be given a transportation-problem formulation, and it is also possible to approximate certain additional linear programming problems by such a formulation. Efficient algorithms exist for the solution of transportation problems. A computational study by Glover et al. suggested that the fastest method for solving classic transportation problems is a specialization of the primal simplex method due to Glover et al., using special data structures (M.A. Forbes, J.N. Holt and A.M. Watts, 1994). An implementation of this approach is capable of handling the general transshipment problem. The method is particularly suitable for large, sparse problems where the number of arcs is a small multiple of the number of nodes. Even for dense problems the method is considered to be competitive with other algorithms (M.A. Forbes, J.N. Holt and A.M. Watts, 1994). Another formulation of the CTP model is Dantzig's adaptation of the simplex method to the CTP, the primal simplex transportation method (PSTM). This method is also known as the modified distribution method (MODI), and has additionally been called the row-column sum method (A. Charnes and W. W. Cooper, 1954).
Subsequently, another method, called the stepping-stone method (SSM), was developed by Charnes and Cooper; it gives an alternative way of determining the simplex-method information. In their paper entitled "The stepping stone method of explaining linear programming calculations in transportation problems", Charnes and Cooper show that the SSM is a very nice way of demonstrating why the simplex method works without recourse to its terminology or methods, and they describe how the SSM and PSTM are related. Charnes and Cooper note that the SSM is relatively easy to explain, but Dantzig's PSTM has certain advantages for large-scale hand calculations (Saul I. Gass, 1990). However, the SSM, contrary to the impression one gets from some texts and from the paper by Arsham and Kahn, is not the method of choice for those who are serious about solving the CTP, such as an analyst who is concerned with solving quite large problems and may have to solve such problems repetitively, e.g. where m = 100 origins and n = 200 destinations, leading to a mathematical problem of 299 independent constraints and 20,000 variables (Saul I. Gass, 1990). In addition to the PSTM and the SSM, a number of methods have been proposed to solve the CTP. They include (amongst others) the following: the dual method of Ford and Fulkerson, the primal partitioning method of Grigoriadis and Walker, the dualplex partitioning method of Gass, the Hungarian method adaptation by Munkres, the shortest path approach of Hoffman and Markowitz and its extension by Lageman, the decomposition approach of Williams, the primal Hungarian method of Balinski and Gomory, and, more recently, the tableau-dual method proposed by Arsham and Kahn. (The early solution attempts of Kantorovich, Hitchcock and Koopmans are excluded, as they did not lead to general computational methods.) (Saul I. Gass, 1990).
The first papers that dealt with machine-based computational issues for solving the TP are those of Suzuki, Dennis, and Ford and Fulkerson. Implementations of CTP algorithms were quite common on the wide range of 1950s and 1960s computers; a listing is given in Gass. CTP computer-based procedures at that time included Charnes and Cooper's SSM, the flow (Hungarian) method of Ford and Fulkerson, Munkres' Hungarian method, the modified simplex method of Suzuki, Dantzig's PSTM and Dennis' implementation of the PSTM. The developers of these early computer codes investigated procedures for finding first feasible solutions, such as VAM, the north-west corner method (NWCM) and variations of minimum-cost allocation procedures (Saul I. Gass, 1990). They also investigated various criteria for selecting a variable to enter the basis. Problems of realistic size could be solved, e.g. m + n. The work of Glover et al. represents a landmark in the development of a TP computer-based algorithm and in computational testing. Their code is a PSTM that uses special list structures for maintaining and changing bases and updating prices. Glover et al. tested various first-basis finding procedures and selection rules for determining the variable to enter the new basis. They concluded that the best way to determine a first feasible solution is a modified row-minimum rule, in which the rows are cycled through in order, each time selecting a single cell with the minimum cost to enter the basis. The cycling continues until all row supplies are exhausted. This differs from the standard row-minimum rule, in which the minimum-cost cells are selected in each row, starting with the first row, until the current row supply is exhausted. The modified row-minimum rule was tested against the NWCM, VAM, a row-minimum rule and a row-column minimum rule in which a row is scanned first for a minimum cell and then a column is scanned, depending on whether the supply or demand is exhausted (Saul I. Gass, 1990).
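The modified row-minimum rule described above can be sketched in a few lines of Python. This is a minimal illustration assuming a balanced problem (total supply equals total demand); the function name and data are our own.

```python
def modified_row_minimum(cost, supply, demand):
    """First feasible solution by the modified row-minimum rule:
    cycle through the rows in order, allocating to one minimum-cost
    cell per row per pass, until all row supplies are exhausted."""
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    while any(s > 0 for s in supply):
        for i in range(m):
            if supply[i] == 0:
                continue
            # cheapest cell in row i among columns with remaining demand
            j = min((k for k in range(n) if demand[k] > 0),
                    key=lambda k: cost[i][k])
            q = min(supply[i], demand[j])
            alloc[i][j] += q
            supply[i] -= q
            demand[j] -= q
    return alloc

start = modified_row_minimum([[4, 6], [2, 5]], [10, 20], [15, 15])
```

Note the contrast with the standard row-minimum rule, which would keep allocating within a row until its supply is exhausted before moving on; here each pass places at most one allocation per row.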
Although VAM tended to decrease the number of basis changes needed to find the optimal solution, it takes an inordinate amount of time to find an initial solution, especially when compared to the time required to perform a basis change (100 changes for a 100 x 100 problem in 0.5 s on a CDC 6400 computer); VAM should therefore be relegated to hand computations, if that. Glover et al. tested a number of rules for determining the variable to enter the basis, including the standard most-negative evaluator rule. Their computational results demonstrated that a modified row-first negative evaluator rule was computationally most efficient. This rule scans the rows of the transportation cost tableau until it encounters the first row containing a candidate cell, and then selects the cell in this row which violates dual feasibility by the largest amount. They also compared their method to the main competitive algorithms in vogue at that time, i.e. the minimum-cost network out-of-kilter method adapted to solve the TP, the standard simplex method for solving the general linear-programming problem and a dual simplex method for solving a CTP. The results of the comparison showed that the Glover et al. method was six times faster than the best of the competitive methods (Saul I. Gass, 1990). A summary of computational times for their method showed that the median solution time for solving 1000 x 1000 TPs on a CDC 6000 computer was 17 s, with a range of 14-22 s. As the TP is a special case of a minimum-cost network problem (the transshipment problem), methods for solving the latter type of problem (such as the out-of-kilter method) are readily adaptable for solving CTPs. Bradley et al. developed a primal method for solving large-scale transshipment problems that utilizes special data structures for basis representation, basis manipulation and pricing. Their code, GNET, has also been specialized to a code (called TNET) for solving capacitated TPs.
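The modified row-first negative evaluator rule described above can also be expressed compactly in code. Given the MODI duals u and v and the set of basic cells, a cell (i, j) "violates dual feasibility" when its reduced cost c_ij - u_i - v_j is negative. The following sketch (names and data are illustrative, not from the cited codes) scans rows in order and, in the first row with a violating cell, picks the worst violator:

```python
def row_first_negative(cost, u, v, basic):
    """Modified row-first negative evaluator rule: scan rows in order;
    in the first row containing a negative reduced-cost cell, return
    the cell in that row with the most negative reduced cost."""
    m, n = len(cost), len(cost[0])
    for i in range(m):
        neg = [(cost[i][j] - u[i] - v[j], j)
               for j in range(n)
               if (i, j) not in basic and cost[i][j] - u[i] - v[j] < 0]
        if neg:
            _, j = min(neg)  # most negative reduced cost in this row
            return i, j
    return None  # no violation anywhere: the solution is optimal

# duals corresponding to the north-west corner basis of a 2 x 2 example
entering = row_first_negative([[4, 6], [2, 5]],
                              {0: 0, 1: -2}, {0: 4, 1: 7},
                              {(0, 0), (1, 0), (1, 1)})
```

The appeal of the rule is that it avoids pricing out the whole tableau on every pivot, unlike the standard most-negative evaluator rule.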
Various pricing rules for selecting the incoming variable were tested, and a representative 250 x 4750 problem was solved in 135 s on an IBM 360/67 using TNET, with the number of pivots and total time being a function of the pricing rule. The GNET procedure has also been embedded into the MPSIII computer-based system for solving linear-programming problems developed by Ketron Management Science Inc. It is called WHIZNET and is designed to solve capacitated transshipment problems, of which the TP is a special case. A typical transshipment problem with 5000 nodes and 23,000 arcs was solved in 37.5 s on an IBM 3033/N computer (L. Collatz and W. Wetterling, 1975). Another general network problem-solver, called PNET, is a primal simplex method for solving capacitated and uncapacitated transshipment problems and TPs. It solved a TP with 2500 origins and 2500 destinations in under 4 min of CPU time on a UNIVAC 1108. It uses augmented thread index lists for the bases and dual variables (Saul I. Gass, 1990). From the above, we see that the present-day state of the art for solving TPs on mainframe computers is quite advanced. With the advent of PCs, a number of researchers and software houses have developed PC-based codes for solving TPs. Many of these codes were developed for the classroom and are capable of solving only small, textbook-size problems. For example, the TP procedure in Erikson and Hall (Saul I. Gass, 1990) is able to solve problems of the order of 20 x 20. A typical commercial TP program is Eastern Software's TSP88, which can solve TPs with up to 510 origins and/or destinations. It is unclear which algorithms are used in the PC TP codes, but we hazard a guess that they are versions of either the PSTM or the SSM (Saul I. Gass, 1990).
2.5 Degeneracy in the Classic Transportation Problem

Degeneracy can occur when the initial feasible solution has a cell with zero allocation or when, as a result of a reallocation, more than one previously allocated cell has a new zero allocation. Whenever we solve a CTP by the PSTM or the SSM, we must determine a set of non-negative values of the variables that not only satisfies the origin and destination constraints, but also corresponds to a basic feasible solution with m + n - 1 variables (Saul I. Gass, 1990). For computational efficiency, all basic cells are kept in a list, with the cells forming the loop ordered at the top of the list and the entering cell first. The remaining cells in the loop are sequenced so that proceeding through them follows the loop. The use of the allocated cells easily handles degeneracy. The PSTM and the SSM do not use a representation of the basis inverse, as the general simplex method does. Instead, these methods take advantage of the fact that any basis of the TP corresponds to a spanning tree of the bipartite network that describes the flows from the origin nodes to the destination nodes (G.B. Dantzig, 1963). Thus, if one is given a basic feasible solution to a CTP, which can be readily generated by, say, the NWCM, and that solution is degenerate, then one must determine which of the arcs with zero flow should be selected to complete the tree. Having the tree that corresponds to the current basic feasible solution enables us to determine whether the current solution is optimal and, if it is not, to determine the entering and leaving variables and the values of the variables in the new solution (Saul I. Gass, 1990). The problem of selecting a tree for a degenerate basic feasible solution to a CTP was recognized early by Dantzig (G.B. Dantzig, 1963), who described a simple perturbation procedure that causes all basic feasible solutions to be non-degenerate.
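Since a basic feasible solution of an m x n CTP must carry m + n - 1 basic cells, degeneracy can be detected simply by counting positive allocations. A small illustrative check (the function name and plans are our own, purely for illustration):

```python
def is_degenerate(alloc):
    """True if the plan has fewer than m + n - 1 positive cells,
    i.e. some basic variables must sit at zero to complete the tree."""
    m, n = len(alloc), len(alloc[0])
    positive = sum(1 for row in alloc for q in row if q > 0)
    return positive < m + n - 1

# A 2 x 2 plan in which one allocation simultaneously exhausts a supply
# and a demand: only 2 positive cells instead of m + n - 1 = 3.
degenerate = is_degenerate([[10, 0], [0, 10]])

# A plan with the full complement of 3 positive cells.
nondegenerate = is_degenerate([[10, 5], [0, 15]])
```

In the degenerate case, one of the zero-flow cells must be designated basic to complete the spanning tree before pricing can proceed, which is precisely the tree-completion problem discussed above.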
In the computer-based CTP solution methods described above, degeneracy does not appear to be of concern. We gather that most computer-based methods for solving CTPs invoke some type of perturbation procedure to complete the tree. We note that the problem of selecting a tree for a degenerate basic feasible solution is really only a minor one if the first basic feasible solution is degenerate. For this case, a perturbation scheme, or a simple selection rule that selects a variable or variables with zero value to complete the tree, can be applied (L. Collatz and W. Wetterling, 1975; G. Hadley, 1962). As the selection of appropriate zero-valued variables is usually not unique, a simple decision rule is used to make a choice, e.g. selecting those variables that have the smallest costs. Once a tree has been established for the first basic feasible solution, the SSM and PSTM prescriptions for changing bases will always yield a new basic feasible solution and corresponding tree, no matter how many degenerate basic variables there are. Subsequent degenerate basic feasible solutions can be generated if there are ties in the selection of the variable to leave the basis. Dropping one and keeping those that were tied at zero level will always yield a tree. Again, a simple decision rule is used to determine which one is dropped from the basis (Saul I. Gass, 1990). Degeneracy can be of concern in that it could cause a series of new bases to be generated without decreasing the value of the objective function, a phenomenon termed stalling. In their paper, Gavish et al. (B. Gavish, P. Schweitzer and E. Shlifer, 1977) study the zero-pivot phenomenon in the CTP and the assignment problem (AP) and develop rules for reducing stalling, i.e. reducing the number of zero pivots (Saul I. Gass, 1990).
For various sizes of (randomly generated) problems, they show that for the CTP the average percentage of zero pivots to total pivots can be quite high, ranging from 8% for 5 x 5 problems to 89% for 250 x 250 problems started with the modified row-minimum rule for selecting the first basic feasible solution. They also show that the percentage of zero pivots is not sensitive to the range of values of the cost coefficients, but is sensitive to the range of values of the supplies ai and demands bj, with a higher percentage of zero pivots occurring when the latter range is tight. For the m x m AP, which will always have (m - 1) basic variables that are zero, the average percentage of zero pivots ranged from 66% for 5 x 5 problems to 95% for 250 x 250 problems. Their rules for selecting a first basic feasible solution, the variable to enter the basis and the variable to leave the basis cause a significant reduction in total computational time (Saul I. Gass, 1990). In their paper, Shafaat and Goyal (A. Shafaat and A.B. Goyal, 1988) develop a procedure for selecting a basic feasible solution with a single degeneracy such that the next solution will improve the objective function value. Their procedure forces the entering variable to have an exchange loop that does not involve the degenerate position with a negative increment (Saul I. Gass, 1990). The efficiency of their procedure, in terms of the computer time it requires versus the small amount of computer time required to perform a number of basis changes (as noted above), is unclear. For large-scale CTPs, we conjecture that a single degenerate basic feasible solution will not cause much stalling, as the chances are that the entering variable will not be on an exchange loop that contains the degenerate variable. We note that a CTP, or a linear-programming problem in general, with single degenerate basic feasible solutions will not cycle (Saul I. Gass, 1990).
2.6 Methods of Finding Initial Basic Feasible Solutions

A basic solution is any collection of (n + m - 1) cells that does not include a dependent subset. The basic solution is the assignment of flows to the basic cells that satisfies the supply and demand constraints. The solution is feasible if all the flows are non-negative. From the theory of linear programming we know that there is an optimal solution that is a basic feasible solution. The CTP has n + m constraints, one of which is redundant. A basic solution for this problem is determined by selecting (n + m - 1) independent variables. The basic variables assume values that satisfy the supplies and demands, while the non-basic variables are zero. Thus the m + n equations are linearly dependent; as we will see, the CTP algorithm exploits this redundancy. Five methods are commonly used to determine an initial basic feasible solution of the classic transportation problem (CTP):

The least cost method
The north-west corner method
Vogel's approximation method
The row minimum method
The column minimum method

The five methods normally differ in the quality of the starting basic solution they produce, and better starting solutions yield a smaller objective value. Some heuristics give better performance than these common methods. The NWCM gives a solution very far from the optimal solution. The least cost method finds a better starting solution by concentrating on the cheapest routes. Vogel's approximation method (VAM) is an improved version of the least cost method that generally produces better starting solutions. The row minimum method starts with the first row and chooses the lowest-cost cell of that row, so that either the capacity of the first supply is exhausted or the demand at the jth distribution centre is satisfied, or both.
The column minimum method starts with the first column and chooses the lowest-cost cell of that column, so that either the demand of the first distribution centre is satisfied or the capacity of the ith supply is exhausted, or both. Among the five methods listed above, however, the North West Corner Method (NWCM), the Least Cost Method (LCM) and Vogel's Approximation Method are the most commonly used for finding initial basic feasible solutions of the CTP. The NWCM gives a solution very far from the optimal solution, whereas VAM and the LCM give results that are often optimal or near-optimal. In a real-time application, Vogel's Approximation Method (VAM) will yield considerable savings over a period of time. On the other hand, if ease of programming and memory space are major considerations, the NWCM is still acceptable for reasonable matrix sizes (up to 50 x 50). However, the difference in times between the two loading techniques increases exponentially (Totschek and Wood, 2004). Another work presents an alternative to Vogel's Approximation Method for the TP, and this method is more efficient than the traditional VAM (Mathirajan and Meenakshi, 2004). In this project, however, we make use of the North West Corner Method (NWCM) and the Least Cost Method (LCM) to find initial basic feasible solutions to the CTP. These solutions are then used to obtain optimal solutions to the CTP using the Stepping Stone Method (SSM). The final answers are then compared with the solutions obtained from the GIS software environment, which solves the CTP by a method other than the sophisticated mathematical procedures already explained in this literature.

Methods of Finding Optimal Solutions of the CTP

Basically, two universal methods are used for finding optimal solutions: the Stepping Stone Method (SSM) and the Modified Distribution (MODI) method.
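The two starting procedures adopted in this project, the NWCM and the LCM, can both be sketched briefly in Python. This is a minimal illustration assuming a balanced problem; the function names and data are our own, not taken from any cited implementation.

```python
def north_west_corner(supply, demand):
    """NWCM: fill cells from the top-left corner, moving right when a
    demand is satisfied and down when a supply is exhausted."""
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1
        else:
            j += 1
    return alloc

def least_cost(cost, supply, demand):
    """LCM: allocate as much as possible to the cheapest cells first."""
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    for c, i, j in sorted((cost[i][j], i, j)
                          for i in range(m) for j in range(n)):
        if supply[i] and demand[j]:
            q = min(supply[i], demand[j])
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
    return alloc

def total_cost(cost, alloc):
    return sum(cost[i][j] * alloc[i][j]
               for i in range(len(cost)) for j in range(len(cost[0])))

cost = [[4, 6], [2, 5]]
nw = north_west_corner([10, 20], [15, 15])
lc = least_cost(cost, [10, 20], [15, 15])
```

On this small instance the cost-blind NWCM produces a more expensive start than the LCM, which is exactly the quality gap noted above; either start is then improved to optimality by the SSM.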
Some heuristics have been developed to obtain better performance, and different methods have been compared in terms of speed. The Transportation Simplex Method and Genetic Algorithms have been compared in terms of accuracy and speed when a large-scale problem is being solved; Genetic Algorithms prove to be more efficient as the size of the problem grows (Kumar and Schilling, 2004). One proposed digital-computer technique for solving the CTP is based on the stepping-stone method; the average time required to perform an iteration using this method depends linearly on the size of the problem, m + n (Dennis). The solution of a real-world problem, to efficiently transport multiple commodities from multiple sources to multiple destinations using a finite fleet of heterogeneous vehicles in the smallest number of discrete time periods, is improved by backward decomposition (Poh, Choo and Wong, 2005). The most efficient method for solving the CTP arises by coupling a primal transportation algorithm with a modified row-minimum start rule and a modified row-first negative evaluator rule (Glover, Karney, Klingman and Napier, 1974); this has already been explained above.

Application Software

Geographic Information Systems (GIS) is a field with exponential growth and a pervasive reach into everyday life. Basically, GIS provides a means to convert data from tables with topological information into maps. GIS tools are consequently capable not only of solving a wide range of spatially related problems, but also of performing simulations to help expert users organize their work in many areas, including public administration, transportation networks and environmental applications. Below are some of the software packages that have been used by many researchers in transportation modelling.
Many software packages have been used to solve the CTP. For example, the MODI algorithm was coded in FORTRAN V, and further substantial time reductions may result from a professional coding of the algorithm in Assembler language; Zimmer reported that a 20-to-1 time reduction was possible by using Assembler rather than FORTRAN in coding minimum-path algorithms (Srinivasan and Thompson, 1973). One work investigated generalized network problems in which flow conservation is not maintained because of cash management, fuel management, capacity expansion, etc. (Gottlieb, 2002); the optimal solution to the pure problem could be used to solve the generalized network problem. Another work introduces a generalized formulation that addresses the divide between spatially aggregate and disaggregate location modelling (Horner and O'Kelly, 2005). In this research we make use of ArcGIS Network Analyst, together with ArcMap, ArcCatalog, VBA, Python, PuLP, GLPK (GNU Linear Programming Kit) and ArcObjects software, to design our model to solve the CTP. A detailed solution algorithm is explained in Chapter 4. The GLPK (GNU Linear Programming Kit) is an open-source software package intended for solving large-scale linear programming (LP), mixed integer programming (MIP) and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library. The GLPK package includes the following main components:

Primal and dual simplex methods
Primal-dual interior-point method
Branch-and-cut method
Application program interface (API)
Translator for GNU MathProg
Stand-alone LP/MIP solver

PuLP is an LP modeller written in Python. PuLP can generate LP and MPS files and call GLPK to solve linear problems. PuLP provides a convenient syntax for the creation of linear problems and a simple way to call the solvers to perform the optimization.
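To illustrate the kind of input GLPK's GNU MathProg translator accepts, a balanced CTP might be modelled as in the following sketch. The set and parameter names are our own illustrations, not taken from any cited code, and a data section would be supplied separately.

```
set I;                               # origins
set J;                               # destinations
param supply{I} >= 0;                # capacity at each origin
param demand{J} >= 0;                # requirement at each destination
param cost{I, J} >= 0;               # unit shipping cost
var x{I, J} >= 0;                    # flow from origin i to destination j

minimize TotalCost: sum{i in I, j in J} cost[i, j] * x[i, j];

s.t. Supply{i in I}: sum{j in J} x[i, j] = supply[i];
s.t. Demand{j in J}: sum{i in I} x[i, j] = demand[j];

solve;
end;
```

The same model can equally be built through PuLP's Python syntax and handed to GLPK as the back-end solver, which is the route taken in this project.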
ArcGIS Network Analyst is still relatively new software, so there is not much published material concerning its application to transportation problems. Only a few researchers in recent years have reported the use of the ArcGIS Network Analyst extension to solve transportation problems. ArcGIS Network Analyst (ArcGIS NA) is a powerful tool of ArcGIS Desktop 9.3 that provides network-based spatial analysis, including routing, travel directions, closest facility and service area analysis. ArcGIS NA enables users to dynamically model realistic network con