Friday, November 29, 2019

Mr. Rogers Essays - Fred Rogers, Mister Rogers

He basically saved public television. In 1969 the government wanted to cut public television funding. Mister Rogers went to Washington, where he gave a remarkable speech of merely six minutes. By the end of it he had not only charmed the hostile senators, he had persuaded them to double the budget they had initially planned to cut. The whole thing can be found on YouTube, in a video called "Mister Rogers defending PBS to the US Senate." Certain fundamentalist preachers hated him because, apparently not getting the "kindest man who ever lived" memo, they would ask him to denounce homosexuals. Mr. Rogers's response? He'd pat the target on the shoulder and say, "God loves you just as you are." Rogers even belonged to a "More Light" congregation in Pittsburgh, a part of the Presbyterian Church dedicated to welcoming LGBT persons to full participation in the church. According to a TV Guide piece on him, Fred Rogers drove a plain old Impala for years. One day, however, the car was stolen from the street near the TV station. When Rogers filed a police report, the story was picked up by every newspaper, radio and media outlet around town. Amazingly, within 48 hours the car was returned to the exact spot it was taken from, with an apology on the dashboard. It read, "If we'd known it was yours, we never would have taken it." Once, on a fancy trip up to a PBS executive's house, he heard the limo driver was going to wait outside for two hours, so he insisted the driver come in and join them (which flustered the host). On the way back, Rogers sat up front, and when he learned that they were passing the driver's home on the way, he asked if they could stop in to meet his family. According to the driver, it was one of the best nights of his life: the house supposedly lit up when Rogers arrived, and he played jazz piano and bantered with them late into the night. And, as with the reporters, Rogers sent him notes and kept in touch with the driver for the rest of his life.

Monday, November 25, 2019

How to Solve Marketing Fire Drills with Kyle DeWeerdt from Apprenda [PODCAST]

Marketing fire drills: can you learn to take care of them before they turn into bona fide emergencies? It can be stressful and overwhelming when projects crop up with little to no notice. Planning where you can and keeping communication with your team strong can help you get through it without negative ramifications. Today's guest is Kyle DeWeerdt, marketing programs manager at Apprenda. He has come up with a simple system that helps his team prioritize their time and complete their work, nipping stressful emergencies in the bud. He's going to help us learn how to resolve issues before they even start. Some of the topics you'll hear about today include: * Some background on Apprenda, the types of content Kyle works with, and Kyle's own background. * An explanation of "marketing fire drills": what are they, and what can you do about them? * An explanation of buffer time, and how it can help you handle the emergencies that come up. * How to break down a project to define a deadline and a publish date for content. * How Kyle manages the process behind the scenes with multiple teams to make sure every task is completed on time. * Kyle's best tips for marketers who want to manage their projects more efficiently.

Thursday, November 21, 2019

A Good Leader Essay Example | Topics and Well Written Essays - 500 words

A Good Leader - Essay Example Being required to plan and organize events and to ensure the harmony of group members is great experience for future positions in the business world, where it might be necessary for me to organize corporate gatherings and facilitate contract negotiations. It has been suggested that I will defend my opinion whether it is right or wrong. I entirely disagree. I perceive myself as a malleable person, and I believe I consider others' opinions. If I defend my opinion, it means that I am confident about its validity. If I am stubborn about my ideas, it is because of my ambition and desire to see projects through to their full potential. However, I realize that openness toward other people's ideas is very important in business. Without being open to other people's ideas and perspectives, it is impossible to collaborate successfully. In business, effective collaboration is built on open trust and the freedom of expression of all group members. Through this open environment, the group is able to compare ideas and attain a goal that would be impossible through the sole efforts of an individual. Even as I ultimately see myself as a leader, I think it's important to consider Franklin Roosevelt, who said, "A good leader can't get too far ahead of his followers." While I hope to function as a strong beacon of direction for my friends, I realize that it's important not to forget the essential similarity of all humankind, and that the great thing about having friends is the chance to share the great journey of life with someone who understands.

Wednesday, November 20, 2019

Analyze how innovation, design, and creativity at McDonald's support Research Paper

Analyze how innovation, design, and creativity at McDonald's support the organization's goals and objectives - Research Paper Example When it comes to the design of its leadership, McDonald's top management has ensured that the company is customer oriented, and it has engaged in corporate social responsibility to work hand in hand with its customers, who are part of the larger community, in order to fulfill its values and objectives. This step has increased the fame of the food stores and even enlarged its customer base. McDonald's has also invested in different designs of its workers' uniforms depending on the occasion and on where staff are serving customers, ranging from entertaining children in its numerous playgrounds to serving customers in its diners. Its creativity is evident from its logo, which is unique and identifies the company wherever it appears. It is also creative in its advertisements and in the advertisements it sponsors. McDonald's different restaurant designs, including drive-through ones that serve customer needs wherever they are, are also an indication of its creativity, and they go a long way toward fulfilling McDonald's goal of serving fast food to all people at their own convenience.

Monday, November 18, 2019

Understanding Geospatial Data in Development Assignment

Understanding Geospatial Data in Development - Assignment Example A band ratio approach can be used by dividing band 5 by band 2 in order to separate the water line from the clouds. The rate of change of the coastline can be calculated for more than 16,000 transects generated at intervals of 50 m along the coastline and the main islands. This can be done using the End Point Rate technique in the Digital Shoreline Analysis System in ArcGIS. Bangladesh is located at the mouths of the Brahmaputra and the Ganges, two of the largest rivers in the world, flowing from the Himalayas. A large part of the country is located in the Bengal basin, an extensive geosyncline with a large population of about 14.2 million people. Most people live in the low-lying floodplains and delta plains, which are usually very vulnerable to flooding during the monsoon season (Alesheikh et al., 2007). As a result, Bangladesh is normally considered one of the countries most at risk in the world from the effects of climate change and sea level rise. The coastline of Bangladesh covers an area of about 47,201 square kilometers, and this region is inhabited by about 46 million people. The River Ganges drains about 1,114,000 square kilometers of catchment area and the River Brahmaputra about 935,000 square kilometers, and these supply billions of tonnes of sediment every year to the Bengal basin. This rapid sedimentation results in very rapid accretion in the estuaries (Goodbred, 2003). In other sections of the coastline, where rapid erosion is experienced due to strong tidal currents and strong wave action, rapid subsidence can be noted, with a recession of about 3-4 km of the shoreline from its original position. If we compare the Landsat images between 1973 and 2000, the recession rate of the shoreline and the time frame can be established (Benny, 2000). By comparing the satellite images
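The band ratio step described above can be sketched in a few lines. This is a minimal illustration, assuming the two bands are already loaded as arrays; the threshold value of 1.0 is an illustrative assumption that would be tuned against the actual Landsat scene.

```python
import numpy as np

def water_mask(band5, band2, threshold=1.0):
    """Separate water from land/cloud by the band-5 / band-2 ratio.

    Pixels where the ratio falls below `threshold` are flagged as water.
    The threshold here is an illustrative assumption, not a calibrated
    value for any particular scene.
    """
    ratio = band5.astype(float) / np.maximum(band2.astype(float), 1e-9)
    return ratio < threshold

# Toy 2x2 scene: low band-5 reflectance over water, high over land/cloud.
band5 = np.array([[10, 200], [12, 180]])
band2 = np.array([[100, 100], [110, 90]])
mask = water_mask(band5, band2)
print(mask)   # left column flagged as water
```

In practice the resulting mask would be vectorized into a shoreline before the transect-based End Point Rate calculation.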

Saturday, November 16, 2019

Identifying Clusters in High Dimensional Data

Identifying Clusters in High Dimensional Data "Ask those who remember, if you do not know." (Holy Quran, 6:43) Removal Of Redundant Dimensions To Find Clusters In N-Dimensional Data Using Subspace Clustering Abstract Data mining has emerged as a powerful tool to extract knowledge from huge databases. Researchers have introduced several machine learning algorithms to explore databases and discover information, hidden patterns, and rules that were not known at data recording time. Due to remarkable developments in storage capacity, processing power, and algorithmic tools, practitioners are developing new and improved algorithms and techniques in several areas of data mining to discover rules and relationships among the attributes of simple and complex higher-dimensional databases. Furthermore, data mining is applied in a large variety of areas ranging from banking to marketing, engineering to bioinformatics, and from investment to risk analysis and fraud detection. Practitioners are analyzing and implementing artificial neural network techniques for classification and regression problems because of their accuracy and efficiency. The aim of this short research project is to develop a way of identifying clusters in high dimensional data, as well as the redundant dimensions that can create noise when identifying those clusters. The technique used in this project exploits the strength of the projections of the data points along the dimensions, measuring the intensity of the projection along each dimension in order to find clusters and redundant dimensions in high dimensional data. 1 Introduction In numerous scientific settings, engineering processes, and business applications ranging from experimental sensor data and process control data to telecommunication traffic observation and financial transaction monitoring, huge amounts of high-dimensional measurement data are produced and stored.
Whereas sensor equipment as well as big storage devices are getting cheaper day by day, data analysis tools and techniques lag behind. Clustering methods are common solutions to unsupervised learning problems, where neither expert knowledge nor any helpful annotation for the data is available. In general, clustering groups data objects in such a way that similar objects come together in clusters, whereas objects from different clusters are highly dissimilar. However, it is often observed that clustering discloses almost no structure even when it is known there must be groups of similar objects. In many cases, the reason is that the cluster structure is induced by some subset of the space's dimensions only, and the many additional dimensions contribute nothing other than noise that hinders the discovery of the clusters within the data. As a solution to this problem, clustering algorithms are applied to the relevant subspaces only. Immediately, the new question is how to determine the relevant subspaces among the dimensions of the full space. Being faced with the power set of the set of dimensions, a brute force trial of all subsets is infeasible due to their exponential number with respect to the original dimensionality. In high dimensional data, as the number of dimensions increases, the visualization and representation of the data become more difficult, and the added dimensions can become a bottleneck: more dimensions mean more visualization and representation problems, and the data appears to disperse towards the corners of the space. Subspace clustering addresses both problems in parallel. It identifies the relevant subspaces, so that the remaining dimensions of the high dimensional data can be marked as redundant, and it finds the cluster structures that become apparent within those subspaces.
Subspace clustering is an extension of traditional clustering that automatically finds the clusters present in subspaces of a high dimensional data space, allowing the data points to be clustered better than in the original space; it works even when the curse of dimensionality occurs. Most clustering algorithms have been designed to discover clusters in the full dimensional space, so they are not effective at identifying clusters that exist within a subspace of the original data space. Many clustering algorithms also produce results that depend on the order in which the input records were processed [2]. Subspace clustering can identify the different clusters within subspaces of a huge amount of sales data, and through it we can find which attributes are related. This can be useful in promoting sales and in planning the inventory levels of different products. It can also be used for finding subspace clusters in spatial databases, and useful decisions can be taken based on the subspace clusters identified [2]. The technique used here for identifying the redundant dimensions that create noise in the data consists of first drawing or plotting the data points in all dimensions. In the second step, the projections of all data points along each dimension are plotted. In the third step, the unions of the projections are plotted using all possible combinations of dimensions, and finally the union of all projections along all dimensions is plotted and analyzed. This shows the contribution of each dimension to identifying the clusters, represented by the weight of its projection. If a given dimension contributes very little to the weight of the projection, that dimension can be considered redundant, which means it is not important for identifying the clusters in the given data.
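The projection idea above can be illustrated with a short sketch: project the points onto each axis and measure how much cluster structure each 1-D projection carries. Using the fraction of empty histogram bins as the "weight" of a projection is an illustrative assumption on my part (a clustered projection shows gaps between dense regions; a noise dimension fills its range evenly), not the exact measure the project proposes.

```python
import numpy as np

def empty_bin_fraction(points, bins=20):
    """Fraction of empty histogram bins in each dimension's 1-D projection.

    A projection that carries cluster structure shows gaps (empty bins
    between dense regions), while a pure-noise dimension fills its range
    more evenly. Dimensions with a low empty-bin fraction are candidate
    redundant dimensions. The bin count and this particular weight
    measure are illustrative choices.
    """
    return np.array([
        np.mean(np.histogram(points[:, d], bins=bins)[0] == 0)
        for d in range(points.shape[1])
    ])

# Toy data: two clusters separated only along dimension 0;
# dimension 1 is pure noise.
rng = np.random.default_rng(0)
a = rng.normal([0.0, 5.0], [0.1, 3.0], size=(50, 2))
b = rng.normal([10.0, 5.0], [0.1, 3.0], size=(50, 2))
points = np.vstack([a, b])

weights = empty_bin_fraction(points)
# Dimension 0 shows a wide empty gap between the two clusters, so its
# weight is high; dimension 1's projection is flat, so it would be
# flagged as redundant for cluster discovery.
```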
The details of this strategy will be covered in later chapters. 2 Data Mining 2.1 What is Data Mining? Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. The information can be used for many purposes, such as increasing revenue or cutting costs. The data mining process also finds hidden knowledge and relationships within the data that were not known when the data was recorded. Describing the data is the first step in data mining, followed by summarizing its attributes (such as the mean and standard deviation). After that, the data is reviewed using visual tools like charts and graphs, and then meaningful relations are determined. In the data mining process, the steps of collecting, exploring and selecting the right data are critically important. Users can analyze data from different dimensions, categorize it and summarize it. Data mining finds the correlations or patterns among the fields in large databases. Data mining has great potential to help companies focus on the most important information in their data warehouses. It can predict future trends and behaviors and allows a business to make more proactive, knowledge-driven decisions. It can answer business questions that were traditionally very time consuming to resolve. It scours databases for hidden patterns, finding predictive information that experts may miss because it lies beyond their expectations. Data mining is normally used to transform data into information or knowledge. It is commonly used in a wide range of profit-oriented practices such as marketing, fraud detection and scientific discovery. Many companies already collect and refine their data, and data mining techniques can be implemented on existing platforms to enhance the value of these information resources. Data mining tools can analyze massive databases to deliver answers to such questions.
Some other terms carry a similar meaning to data mining, such as "knowledge mining", "knowledge extraction" or "pattern analysis". Data mining can also be treated as knowledge discovery from data (KDD), although some people regard data mining as simply an essential step in knowledge discovery from large data. The process of knowledge discovery from data contains the following steps. * Data cleaning (removing noise and inconsistent data) * Data integration (combining multiple data sources) * Data selection (retrieving the data relevant to the analysis task from the database) * Data transformation (transforming the data into forms appropriate for mining by performing summary or aggregation operations) * Data mining (applying intelligent methods in order to extract data patterns) * Pattern evaluation (identifying the truly interesting patterns representing knowledge based on some measures) * Knowledge representation (using representation techniques to present the mined knowledge to the user) 2.2 Data Data can be any type of fact, text, image or number that can be processed by a computer. Today's organizations are accumulating large and growing amounts of data in different formats and in different databases. This can include operational or transactional data covering costs, sales, inventory, payroll and accounting. It can also include non-operational data, such as industry sales and forecast data, and metadata, which is data about the data itself, such as logical database design and data dictionary definitions. 2.3 Information Information can be retrieved from data via the patterns, associations or relationships that may exist in it. For example, retail point of sale transaction data can be analyzed to yield information about which products are being sold and when. 2.4 Knowledge Knowledge can be retrieved from information via historical patterns and future trends.
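The KDD steps listed above can be walked through on a toy example. The records, the cleaning rule, and the mined "pattern" below are all illustrative assumptions, chosen only to make each step concrete.

```python
# A toy walk through the knowledge-discovery steps listed above.
raw = [
    {"item": "bread", "qty": 2},
    {"item": "milk", "qty": -1},   # inconsistent record: negative quantity
    {"item": "bread", "qty": 1},
    {"item": "milk", "qty": 2},
]

# Data cleaning: drop noisy / inconsistent records.
clean = [r for r in raw if r["qty"] > 0]

# Data transformation: aggregate quantities per item.
totals = {}
for r in clean:
    totals[r["item"]] = totals.get(r["item"], 0) + r["qty"]

# Data mining / pattern evaluation: identify the best-selling item.
best = max(totals, key=totals.get)
print(best, totals)   # bread sells 3 units, milk sells 2
```

A real pipeline would of course draw on databases and far richer mining methods, but the stages map one-to-one onto the list above.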
For example, analyzing retail supermarket sales data from the point of view of promotional efforts can provide knowledge of customers' buying behavior. Hence the items most at risk during promotional efforts can be determined easily by the manufacturer. 2.5 Data warehouse Advancements in data capture, processing power, data transmission and storage technologies are enabling industry to integrate their various databases into data warehouses. The process of centralizing and retrieving the data is called data warehousing. Data warehousing is a new term, but the concept is somewhat old. A data warehouse is storage for massive amounts of data in electronic form, and data warehousing represents an ideal way of maintaining a central repository for all organizational data. The purpose of a data warehouse is to maximize user access and analysis. Data from different sources is extracted, transformed and then loaded into the data warehouse. Users and clients can generate different types of reports and do business analysis by accessing the data warehouse. Data mining is primarily used today by companies with a strong consumer focus: retail, financial, communication, and marketing organizations. It allows these organizations to evaluate associations between internal and external factors. Product positioning, price or staff skills are examples of internal factors; external factors include economic indicators, customer demographics and competition. It also allows them to calculate the impact on sales, corporate profits and customer satisfaction, and to drill from summarized information down to detailed transactional data. Given databases of sufficient size and quality, data mining technology can generate new business opportunities. Data mining usually automates the procedure of searching for predictive information in huge databases.
Questions that traditionally required extensive hands-on analysis can now be answered directly from the data very quickly. Targeted marketing is an example of a predictive problem: data mining uses data on previous promotional mailings to recognize the targets most likely to maximize return on investment in future mailings. Data mining tools traverse huge databases and discover previously unseen patterns in a single step. An example is the analysis of retail sales data to recognize apparently unrelated products that are usually purchased together. Further pattern discovery problems include identifying fraudulent credit card transactions and identifying irregular data that could represent data entry errors. When data mining tools are used on high-performance parallel processing systems, they are able to analyze huge databases in far less time. Faster processing means that users can experiment with more detail to recognize complex data, and the high speed and quick response make it practical for users to examine huge amounts of data. Huge databases, in turn, give improved and better predictions. 2.6 Descriptive and Predictive Data Mining Descriptive data mining aims to find patterns in the data that provide some information about what the data contains. It describes patterns in existing data and is generally used to create meaningful subgroups such as demographic clusters; typical outputs take the form of summaries and visualizations, clustering, and link analysis. Predictive data mining is used to forecast explicit values based on patterns determined from known results. For example, given a database of clients who have already responded to a specific offer, a model can be built that predicts which prospects are most likely to respond to the same offer.
It is usually applied to data mining projects whose goal is to identify a statistical or neural network model, or set of models, that can be used to predict some response of interest. For example, a credit card company may engage in predictive data mining to derive a (trained) model, or set of models, that can quickly identify transactions which have a high probability of being fraudulent. Other data mining projects may be more exploratory in nature (e.g. determining clusters or divisions of customers), in which case drill-down, descriptive and tentative methods need to be applied. Predictive data mining is goal oriented. It can be decomposed into the following major tasks. * Data Preparation * Data Reduction * Data Modeling and Prediction * Case and Solution Analysis 2.7 Text Mining Text mining, sometimes also called text data mining, is more or less equivalent to text analytics. Text mining is the process of extracting or deriving high quality information from text. High quality information is typically derived by detecting patterns and trends through means such as statistical pattern learning. Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features, the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. "High quality" in text mining usually refers to some combination of relevance, novelty, and interestingness. Text categorization, concept/entity extraction, text clustering, sentiment analysis, production of rough taxonomies, entity relation modeling, and document summarization are typical text mining tasks. Text mining is also known as the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources.
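The "structuring the input text" step above can be sketched in its simplest form: tokenize the raw text and count term frequencies, the most basic structured representation text mining builds on. The tokenizer and the tiny stopword list here are illustrative assumptions, not a production linguistic pipeline.

```python
import re
from collections import Counter

def term_frequencies(text, stopwords=frozenset({"the", "is", "of", "and", "from"})):
    """Structure raw text into term counts, the simplest text-mining
    representation. The regex tokenizer and stopword list are
    illustrative simplifications of real linguistic preprocessing."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

tf = term_frequencies("Text mining is the mining of patterns from text.")
print(tf.most_common(2))   # 'text' and 'mining' each appear twice
```

Patterns are then derived over these structured counts, for example by comparing term distributions across documents.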
Linking together the extracted information is the key element in creating new facts or new hypotheses to be examined further by more conventional means of experimentation. In text mining, the goal is to discover unknown information, something that no one yet knows and so could not yet have written down. The difference between ordinary data mining and text mining is that in text mining the patterns are retrieved from natural language text instead of from structured databases of facts. Databases are designed for programs to process automatically; text is written for people to read. Most researchers think that a full-fledged simulation of how the brain works will be needed before programs that read the way people do can be written. 2.8 Web Mining Web mining is the technique used to extract and discover information from web documents and services automatically. The interest of various research communities, the tremendous growth of information resources on the Web, and the recent interest in e-commerce have made this area of research very large. Web mining can be decomposed into the following subtasks. * Resource finding: fetching the intended web documents. * Information selection and pre-processing: automatically selecting and preprocessing specific information from the fetched web resources. * Generalization: automatically discovering general patterns within individual websites and across multiple sites. * Analysis: validation and explanation of the mined patterns. Web mining can be categorized into three areas of interest based on which part of the Web is to be mined: web content mining, web structure mining and web usage mining. Web content mining describes the discovery of useful information from web contents, data and documents [10]. In the past the internet consisted of only different types of services and data resources, but today most data is available over the internet; even digital libraries are available on the Web.
The web contents consist of several types of data including text, images, audio, video, metadata and hyperlinks. Most companies are trying to transform their business and services into electronic form and put them on the Web. As a result, company databases that previously resided on legacy systems are now accessible over the Web, so employees, business partners and even end clients are able to access a company's databases online. Users access applications over the web via web interfaces, and companies are moving their business onto the web because the internet is capable of connecting to any other computer anywhere in the world [11]. Some web contents are hidden and hence cannot be indexed; dynamically generated data produced by queries against databases, and private data, fall into this area. Unstructured data such as free text, semi-structured data such as HTML, and fully structured data such as tables or database-generated web pages can all be considered in this category; however, unstructured text is the most common form of web content. Work on web content mining is mostly done from two points of view: IR and DB. "From the IR view, web content mining assists and improves information finding or filtering for the user. From the DB view, web content mining models the data on the web and integrates it so that more sophisticated queries than keyword search can be performed." [10]. In web structure mining, we are more concerned with the structure of the hyperlinks within the web itself, which can be called inter-document structure [10]. It is closely related to web usage mining [14]. Pattern detection and graph mining are essentially related to web structure mining, and link analysis techniques can be used to determine the patterns in the graph.
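The link analysis idea can be shown in miniature: represent the hyperlink graph and count inbound links per page. The tiny link graph below is a made-up example, and real search engines refine raw inbound counts with iterative ranking schemes (e.g. PageRank) rather than using them directly.

```python
from collections import Counter

# Hypothetical link graph: page -> list of pages it links to.
links = {
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["home", "about"],
    "blog": ["home", "products"],
}

# Web structure mining in its simplest form: count inbound links.
inbound = Counter(target for targets in links.values() for target in targets)
ranked = [page for page, _ in inbound.most_common()]
print(ranked)   # 'home' ranks first: three pages point to it
```

A page pointed to by many others floats to the top of the ranking, which is the intuition behind the rank calculation described below.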
Search engines like Google make heavy use of web structure mining. For example, the links are mined, and one can then determine the web pages that point to a particular web page. When a string is searched, the web page with the most links pointing to it may come first in the list; web pages are listed based on a rank which is calculated from the rank of the pages that point to them [14]. Based on web structural data, web structure mining can be divided into two categories. The first kind extracts patterns from the hyperlinks in the web; a hyperlink is a structural component that links or connects a web page to a different web page or a different location. The other kind deals with the document structure, using the tree-like structure of HTML or XML tags to analyze and describe the web pages. With the continuous growth of e-commerce, web services and web applications, the volume of clickstream and user data collected by web-based organizations in their daily operations has increased. Organizations can analyze such data to determine the lifetime value of clients, design cross-marketing strategies, and so on [13]. Web usage mining deals with the data generated by users' clickstreams. "The web usage data includes web server access logs, proxy server logs, browser logs, user profiles, registration data, user sessions, transactions, cookies, user queries, bookmark data, mouse clicks and scrolls and any other data resulting from interaction" [10]. Web usage mining is thus a most important task of web mining [12]. Weblog databases can provide rich information about web dynamics. In web usage mining, web log records are mined to discover user access patterns, through which potential customers can be identified, the quality of internet services can be enhanced and web server performance can be improved.
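Mining access patterns from web server logs can be sketched in a few lines. The log lines below are illustrative, loosely modeled on the Common Log Format; a real pipeline would first clean and sessionize the records as described in the next stage.

```python
import re
from collections import Counter

# Illustrative access-log lines (simplified Common Log Format).
log_lines = [
    '10.0.0.1 - - [18/Nov/2019:10:00:00] "GET /index.html HTTP/1.1" 200 512',
    '10.0.0.2 - - [18/Nov/2019:10:00:05] "GET /products.html HTTP/1.1" 200 1024',
    '10.0.0.1 - - [18/Nov/2019:10:00:09] "GET /products.html HTTP/1.1" 200 1024',
]

# Pre-processing: extract the requested page from each record.
pattern = re.compile(r'"GET (\S+) HTTP')
pages = Counter(m.group(1)
                for line in log_lines
                for m in [pattern.search(line)] if m)

# Pattern discovery in its simplest form: the most-accessed pages.
print(pages.most_common(1))   # /products.html was requested twice
```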
Many techniques can be developed for the implementation of web usage mining, but it is important to know that the success of such applications depends upon how much valid and reliable knowledge can be discovered from the log data. Most often, the web logs are cleaned, condensed and transformed before the extraction of any useful and significant information. Web mining can be performed on web log records to find association patterns, sequential patterns and trends in web access. The overall web usage mining process can be divided into three inter-dependent stages: data collection and pre-processing, pattern discovery, and pattern analysis [13]. In the data collection and preprocessing stage, the raw data is collected, cleaned and transformed into a set of user transactions representing the activities of each user during visits to the web site. In the pattern discovery stage, statistical, database, and machine learning operations are performed to retrieve hidden patterns representing the typical behavior of users, as well as summary statistics on web resources, sessions, and users. 3 Classification 3.1 What is Classification? As the quantity and variety of available data increase, robust, efficient and versatile data categorization techniques are needed for exploration [16]. Classification is a method of assigning class labels to patterns. It is a data mining methodology used to predict group membership for data instances. For example, one may want to use classification to guess whether the weather on a specific day will be "sunny", "cloudy" or "rainy". (The related data mining technique used to group similar data objects or points together, without predefined labels, is called clustering.) Classification uses attribute values found in the data of one class to distinguish it from other types or classes. Data classification is chiefly concerned with the treatment of large datasets.
In classification we build a model by analyzing the existing data and describing the characteristics of the various classes of data; we can then use this model to predict the class or type of new data. Classification is a supervised machine learning procedure in which individual items are placed into groups based on quantitative information about one or more characteristics of the items. Decision trees and Bayesian networks are examples of classification methods. A closely related task is clustering: the process of finding similar data objects or points within a given dataset, where similarity can be expressed through distance measures or other parameters, depending upon the need and the given data. Classification is an ancient term as well as a modern one, since the classification of animals, plants and other physical objects is still valid today. Classification is a way of thinking about things rather than a study of things in itself, so it draws its theory and application from the complete range of human experience and thought [18]. From a bigger picture, classification can cover grouping medical patients based on disease, finding the images containing a red rose in an image database, retrieving the documents describing "classification" from a document or text database, diagnosing equipment malfunction based on cause, and rating loan applicants based on their likelihood of payment. In the last case, the problem is to predict a new applicant's loan eligibility given old data about customers. There are many techniques used for data categorization and classification; the most common are decision tree classifiers and Bayesian classifiers. 3.2 Types of Classification There are two types of classification: supervised classification and unsupervised classification. Supervised learning is a machine learning technique for discovering a function from training data. The training data contains pairs of input objects and their desired outputs.
The output of the function can be a continuous value which can be called regression, or can predict a class label of the input object which can be called as classification. The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of input and target output). To achieve this goal, the learner needs to simplify from the presented data to hidden situations in a meaningful way. The unsupervised learning is a class of problems in machine learning in which it is needed to seek to determine how the data are organized. It is distinguished from supervised learning in that the learner is given only unknown examples. Unsupervised learning is nearly related to the problem of density estimation in statistics. However unsupervised learning also covers many other techniques that are used to summarize and explain key features of the data. One form of unsupervised learning is clustering which will be covered in next chapter. Blind source partition based on Independent Component Analysis is another example. Neural network models, adaptive resonance theory and the self organizing maps are most commonly used unsupervised learning algorithms. There are many techniques for the implementation of supervised classification. We will be discussing two of them which are most commonly used which are Decision Trees classifiers and Naà ¯ve Bayesian Classifiers. 3.2.1 Decision Trees Classifier There are many alternatives to represent classifiers. The decision tree is probably the most widely used approach for this purpose. It is one of the most widely used supervised learning methods used for data exploration. It is easy to use and can be represented in if-then-else statements/rules and can work well in noisy data as well [16]. Tree like graph or decisions models and their possible consequences including resource costs, chance event, outcomes, and utilities are used in decision trees. 
Decision trees are most commonly used in specifically in decision analysis, operations research, to help in identifying a strategy most probably to reach a target. In machine learning and data mining, a decision trees are used as predictive model; means a planning from observations calculations about an item to the conclusions about its target value. More descriptive names for such tree models are classification tree or regression tree. In these tree structures, leaves are representing class ifications and branches are representing conjunctions of features those lead to classifications. The machine learning technique for inducing a decision tree from data is called decision tree learning, or decision trees. Decision trees are simple but powerful form of multiple variable analyses [15]. Classification is done by tree like structures that have different test criteria for a variable at each of the nodes. New leaves are generated based on the results of the tests at the nodes. Decision Tree is a supervised learning system in which classification rules are constructed from the decision tree. Decision trees are produced by algorithms which identify various ways splitting data set into branch like segment. Decision tree try to find out a strong relationship between input and target values within the dataset [15]. In tasks classification, decision trees normally visualize that what steps should be taken to reach on classification. Every decision tree starts with a parent node called root node which is considered to be the parent of every other node. Each node in the tree calculates an attribute in the data and decides which path it should follow. Typically the decision test is comparison of a value against some constant. Classification with the help of decision tree is done by traversing from the root node up to a leaf node. Decision trees are able to represent and classify the diverse types of data. The simplest form of data is numerical data which is most familiar too. 
Organizing nominal data is also required many times in many situations. Nominal quantities are normally represented via discrete set of symbols. For example weather condition can be described in either nominal fashion or numeric. Quantification can be done about temperature by saying that it is eleven degrees Celsius or fifty two degrees Fahrenheit. The cool, mild, cold, warm or hot terminologies can also be sued. The former is a type of numeric data while and the latter is an example of nominal data. More precisely, the example of cool, mild, cold, warm and hot is a special type of nominal data, expressed as ordinal data. Ordinal data usually has an implicit assumption of ordered relationships among the values. In the weather example, purely nominal description like rainy, overcast and sunny can also be added. These values have no relationships or distance measures among each other. Decision Trees are those types of trees where each node is a question, each branch is an answer to a question, and each leaf is a result. Here is an example of Decision tree. Roughly, the idea is based upon the number of stock items; we have to make different decisions. If we dont have much, you buy at any cost. If you have a lot of items then you only buy if it is inexpensive. Now if stock items are less than 10 then buy all if unit price is less than 10 otherwise buy only 10 items. Now if we have 10 to 40 items in the stock then check unit price. If unit price is less than 5 £ then buy only 5 items otherwise no need to buy anything expensive since stock is good already. Now if we have more than 40 items in the stock, then buy 5 if and only if price is less than 2 £ otherwise no need to buy too expensive items. So in this way decision trees help us to make a decision at each level. Here is another example of decision tree, representing the risk factor associated with the rash driving. 
Identifying Clusters in High Dimensional Data

"Ask those who remember, if you do not know." (Holy Quran, 6:43)

Removal of Redundant Dimensions to Find Clusters in N-Dimensional Data Using Subspace Clustering

Abstract

Data mining has emerged as a powerful tool to extract knowledge from huge databases. Researchers have introduced several machine learning algorithms to explore databases and discover information, hidden patterns, and rules that were not known at data recording time. Owing to remarkable developments in storage capacity, processing power, and algorithmic tools, practitioners are developing new and improved algorithms and techniques in several areas of data mining to discover rules and relationships among attributes in simple and complex higher-dimensional databases.
Furthermore, data mining has implementations in a large variety of areas, ranging from banking to marketing, engineering to bioinformatics, and from investment to risk analysis and fraud detection. Practitioners are analyzing and implementing artificial neural network techniques for classification and regression problems because of their accuracy and efficiency. The aim of this short research project is to develop a way of identifying clusters in high dimensional data, as well as the redundant dimensions which create noise when identifying those clusters. The technique used in this project exploits the projections of the data points along the dimensions: the intensity of the projection along each dimension indicates how much that dimension contributes to the cluster structure, so both clusters and redundant dimensions can be found in high dimensional data.

1 Introduction

In numerous scientific settings, engineering processes, and business applications, ranging from experimental sensor data and process control data to telecommunication traffic observation and financial transaction monitoring, huge amounts of high-dimensional measurement data are produced and stored. Whereas sensor equipment and big storage devices are getting cheaper day by day, data analysis tools and techniques lag behind. Clustering methods are common solutions to unsupervised learning problems where neither expert knowledge nor any helpful annotation of the data is available. In general, clustering groups data objects so that similar objects fall together in clusters, whereas objects from different clusters are highly dissimilar. However, it is often observed that clustering discloses almost no structure even when it is known that there must be groups of similar objects.
In many cases, the reason is that the cluster structure is induced by some subset of the space's dimensions only, and the many additional dimensions contribute nothing other than noise that hinders the discovery of the clusters within the data. As a solution to this problem, clustering algorithms are applied to the relevant subspaces only. Immediately, the new question is how to determine the relevant subspaces among the dimensions of the full space. Being faced with the power set of the set of dimensions, a brute-force trial of all subsets is infeasible due to their exponential number with respect to the original dimensionality. In high dimensional data, as dimensions increase, the visualization and representation of the data become more difficult, and the added dimensions can become a bottleneck: more dimensions mean more visualization and representation problems, and the data appears to disperse towards the corners of the space. Subspace clustering addresses both problems in parallel: it identifies the irrelevant subspaces, which can be marked as redundant in high dimensional data, and it finds the cluster structures that become apparent in the remaining subspaces. Subspace clustering is an extension of traditional clustering which automatically finds the clusters present in subspaces of the high dimensional data space, allowing better clustering of the data points than in the original space; it works even when the curse of dimensionality occurs. Most clustering algorithms have been designed to discover clusters in the full dimensional space, so they are not effective in identifying clusters that exist within a subspace of the original data space. Moreover, most clustering algorithms produce clustering results that depend on the order in which the input records were processed [2].
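The core difficulty described above, that cluster structure may live in only a few dimensions while the rest contribute noise, can be illustrated with a small, hypothetical sketch. Synthetic data is clustered in dimensions 0 and 1 only, while dimension 2 is uniform noise; measuring how concentrated the projection of the data is along each dimension (here via histogram-bin occupancy, an illustrative stand-in for the "weight of projection" idea) flags the noisy dimension as redundant. All names, data and thresholds are assumptions for this sketch, not the actual procedure developed later in the thesis.

```python
import random

random.seed(0)

# Synthetic data: two clusters that differ only in dimensions 0 and 1;
# dimension 2 is uniform noise and carries no cluster structure.
points = []
for _ in range(100):
    points.append([random.gauss(0.0, 0.3), random.gauss(0.0, 0.3), random.uniform(-5, 5)])
for _ in range(100):
    points.append([random.gauss(4.0, 0.3), random.gauss(4.0, 0.3), random.uniform(-5, 5)])

def projection_spread(points, dim, bins=20, lo=-6.0, hi=6.0):
    """Histogram the projection of all points onto one dimension and
    return the fraction of bins that are occupied: dimensions with
    cluster structure concentrate their projection in a few bins,
    noisy dimensions spread it across many bins."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for p in points:
        i = min(bins - 1, max(0, int((p[dim] - lo) / width)))
        counts[i] += 1
    return sum(1 for c in counts if c > 0) / bins

for d in range(3):
    occ = projection_spread(points, d)
    label = "redundant (noise)" if occ > 0.5 else "carries cluster structure"
    print(f"dimension {d}: occupancy {occ:.2f} -> {label}")
```

On this toy data the uniform dimension occupies far more histogram bins than the two clustered dimensions, which is the kind of signal the projection-based strategy relies on.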
Subspace clustering can identify the different clusters within subspaces of, for example, a huge amount of sales data, and through it we can find which attributes are related. This can be useful in promoting sales and in planning the inventory levels of different products. It can also be used for finding subspace clusters in spatial databases, and useful decisions can be taken based on the subspace clusters identified [2]. The technique used here for identifying the redundant dimensions which create noise in the data consists of first drawing or plotting the data points in all dimensions. In the second step, the projections of all data points along each dimension are plotted. In the third step, the unions of projections along each dimension are plotted for all possible combinations of dimensions, and finally the union of all projections along all dimensions is plotted and analyzed. This shows the contribution of each dimension to identifying the clusters, represented by the weight of its projection. If a given dimension contributes very little to the weight of projection, that dimension can be considered redundant, meaning it is not important for identifying the clusters in the given data. The details of this strategy will be covered in later chapters.

2 Data Mining

2.1 What is Data Mining?

Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. The information can be used for many purposes, such as increasing revenue or cutting costs. The data mining process also finds hidden knowledge and relationships within the data which were not known when the data was recorded. Describing the data is the first step in data mining, followed by summarizing its attributes (such as mean and standard deviation).
After that, the data is reviewed using visual tools like charts and graphs, and then meaningful relations are determined. In the data mining process, the steps of collecting, exploring and selecting the right data are critically important. The user can analyze data from different dimensions, categorize it, and summarize it. Data mining finds the correlations or patterns among fields in large databases. Data mining has great potential to help companies focus on the most important information in their data warehouses. It can predict future trends and behaviors and allow businesses to make more proactive, knowledge-driven decisions. It can answer business questions that were traditionally very time-consuming to resolve. It scours databases for hidden patterns, finding predictive information that experts may miss because it lies beyond their expectations. Data mining is normally used to transform data into information or knowledge. It is commonly used in a wide range of profiling practices such as marketing, fraud detection and scientific discovery. Many companies already collect and refine their data, and data mining techniques can be implemented on existing platforms to enhance the value of information resources. Data mining tools can analyze massive databases to deliver answers to questions. Other terms carrying a similar meaning to data mining are "knowledge mining", "knowledge extraction" and "pattern analysis". Data mining can also be treated as Knowledge Discovery from Data (KDD); some people simply regard data mining as an essential step in knowledge discovery from large data. The process of knowledge discovery from data contains the following steps.
* Data cleaning (removing noise and inconsistent data)
* Data integration (combining multiple data sources)
* Data selection (retrieving the data relevant to the analysis task from the database)
* Data transformation (transforming the data into forms appropriate for mining by performing summary or aggregation operations)
* Data mining (applying intelligent methods in order to extract data patterns)
* Pattern evaluation (identifying the truly interesting patterns representing knowledge, based on some measures)
* Knowledge representation (techniques used to present the mined knowledge to the user)

2.2 Data

Data can be any type of fact, text, image or number which can be processed by computer. Today's organizations are accumulating large and growing amounts of data in different formats and in different databases. This can include operational or transactional data covering costs, sales, inventory, payroll and accounting. It can also include non-operational data, such as industry sales and forecast data, as well as metadata, which is data about the data itself, such as logical database designs and data dictionary definitions.

2.3 Information

Information can be retrieved from data via the patterns, associations or relationships that may exist in the data. For example, retail point-of-sale transaction data can be analyzed to yield information about which products are being sold and when.

2.4 Knowledge

Knowledge can be retrieved from information via historical patterns and future trends. For example, analysis of retail supermarket sales data from a promotional point of view can provide knowledge of customer buying behavior. Hence the items at most risk for promotional efforts can be determined by the manufacturer easily.
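As a rough illustration (not a real KDD toolkit), the steps listed above can be walked through on a toy list of point-of-sale records of the kind the following sections mention; all record fields, values and variable names are hypothetical.

```python
# A toy walk-through of the knowledge-discovery steps: cleaning,
# selection/transformation, then a trivial "mining" step.

raw_records = [
    {"item": "milk",  "qty": 2,  "price": 1.50},
    {"item": "bread", "qty": 1,  "price": 2.00},
    {"item": "milk",  "qty": -3, "price": 1.50},   # noisy entry: negative quantity
    {"item": "milk",  "qty": 1,  "price": 1.50},
    {"item": "eggs",  "qty": 12, "price": 0.20},
]

# 1. Data cleaning: drop inconsistent rows (negative quantities).
cleaned = [r for r in raw_records if r["qty"] > 0]

# 2-4. Data selection and transformation: aggregate revenue per item.
revenue = {}
for r in cleaned:
    revenue[r["item"]] = revenue.get(r["item"], 0.0) + r["qty"] * r["price"]

# 5-6. "Mining" and pattern evaluation: pick the top-selling item.
top_item = max(revenue, key=revenue.get)
print(revenue)
print("top seller:", top_item)
```

Each comment marks which of the listed KDD steps the line stands in for; real pipelines replace the trivial aggregation with the statistical and machine learning methods discussed below.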
2.5 Data Warehouse

Advancements in data capture, processing power, data transmission and storage technologies are enabling industry to integrate its various databases into data warehouses. The process of centralizing and retrieving the data is called data warehousing. Data warehousing is a new term, but the concept is a bit older: a data warehouse is a store of massive amounts of data in electronic form, representing an ideal way of maintaining a central repository of all organizational data. The purpose of a data warehouse is to maximize user access and analysis. The data from different data sources is extracted, transformed, and then loaded into the data warehouse. Users and clients can generate different types of reports and carry out business analysis by accessing the data warehouse. Data mining is primarily used today by companies with a strong consumer focus: retail, financial, communication, and marketing organizations. It allows these organizations to evaluate associations between internal and external factors. Product positioning, price, or staff skills are examples of internal factors; economic indicators, customer demographics, and competition are examples of external factors. It also allows them to estimate the impact on sales, corporate profits, and customer satisfaction, and to drill down from summarized information into detailed transactional data. Given databases of sufficient size and quality, data mining technology can generate new business opportunities. Data mining usually automates the procedure of searching for predictive information in huge databases, so questions that traditionally required extensive hands-on analysis can now be answered directly from the data very quickly. Targeted marketing is an example of a predictive problem.
Data mining uses data on previous promotional mailings to recognize the targets most likely to maximize the return on investment of future mailings. Tools used in data mining traverse huge databases and discover previously unseen patterns in a single step; analysis of retail sales data to recognize apparently unrelated products which are usually purchased together is an example. Other pattern discovery problems include identifying fraudulent credit card transactions and identifying irregular data that could represent data entry errors. When data mining tools run on high-performance parallel processing systems, they are able to analyze huge databases in very little time. Faster processing means that users can automatically experiment with more detail to understand complex data; high speed and quick response make it practical for users to examine huge amounts of data, and huge databases, in turn, give improved and better predictions.

2.6 Descriptive and Predictive Data Mining

Descriptive data mining aims to find patterns in the data that provide some information about what the data contains. It describes patterns in existing data and is generally used to create meaningful subgroups, such as demographic clusters. Typical descriptive outputs take the form of summaries and visualizations, clustering, and link analysis. Predictive data mining is used to forecast explicit values, based on patterns determined from known results. For example, from a database of clients who have already responded to a particular offer, a model can be built that predicts which prospects are most likely to respond to the same offer. It is usually applied in data mining projects whose goal is to identify a statistical or neural network model, or set of models, that can be used to predict some response of interest.
For example, a credit card company may want to engage in predictive data mining to derive a (trained) model, or set of models, that can quickly identify transactions which have a high probability of being fraudulent. Other data mining projects may be more exploratory in nature (e.g. determining clusters or divisions of customers), in which case drill-down, descriptive and tentative methods need to be applied. Predictive data mining is goal oriented and can be decomposed into the following major tasks.

* Data preparation
* Data reduction
* Data modeling and prediction
* Case and solution analysis

2.7 Text Mining

Text mining, sometimes called text data mining, is more or less equivalent to text analytics. Text mining is the process of extracting or deriving high-quality information from text. High-quality information is typically obtained by finding patterns and trends through means such as statistical pattern learning. Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features, the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. "High quality" in text mining usually refers to some combination of relevance, novelty, and interestingness. Text categorization, concept/entity extraction, text clustering, sentiment analysis, production of rough taxonomies, entity relation modeling, and document summarization are typical text mining tasks. Text mining is also known as the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. Linking together the extracted information is the key element in creating new facts or new hypotheses to be examined further by more conventional means of experimentation.
In text mining, the goal is to discover unknown information, something that no one yet knows and so could not yet have written down. The difference between ordinary data mining and text mining is that in text mining the patterns are retrieved from natural language text instead of from structured databases of facts. Databases are designed for programs to process automatically; text is written for people to read. Many researchers think that a full-fledged simulation of how the brain works will be needed before programs that read the way people do can be written.

2.8 Web Mining

Web mining is the technique used to extract and discover information from web documents and services automatically. The interest of various research communities, the tremendous growth of information resources on the Web, and recent interest in e-commerce have made this area of research very large. Web mining can usually be decomposed into the following subtasks.

* Resource finding: fetching the intended web documents.
* Information selection and pre-processing: automatically selecting and preprocessing specific information from the fetched web resources.
* Generalization: automatically discovering general patterns within individual websites and across multiple websites.
* Analysis: validation and explanation of the mined patterns.

Web mining can be categorized into three areas of interest, based on which part of the Web needs to be mined: web content mining, web structure mining and web usage mining. Web content mining describes the discovery of useful information from web contents, data and documents [10]. In the past, the Internet consisted of only different types of services and data resources, but today most data is available over the Internet; even digital libraries are available on the Web. Web contents consist of several types of data, including text, images, audio, video, metadata, and hyperlinks.
Most companies are trying to transform their business and services into electronic form and put them on the Web. As a result, company databases which previously resided on legacy systems are now accessible over the Web, so employees, business partners and even end clients are able to access them. Users access applications over the Web via web interfaces, which is a large part of why companies are moving their business onto the Web: the Internet is capable of connecting to any other computer anywhere in the world [11]. Some web contents are hidden and hence cannot be indexed; dynamically generated data produced by queries against databases, and private data, fall in this area. Unstructured data such as free text, semi-structured data such as HTML, and fully structured data such as tables or database-generated web pages can all be considered in this category; however, unstructured text is the most common form of web content. Work on web content mining is mostly done from two points of view, IR and DB: "From the IR view, web content mining assists and improves information finding or filtering for the user. From the DB view, web content mining models the data on the web and integrates it so that more sophisticated queries than keyword search can be performed" [10]. In web structure mining, we are more concerned with the structure of hyperlinks within the web itself, which can be called inter-document structure [10]. It is closely related to web usage mining [14]. Pattern detection and graph mining are essentially related to web structure mining, and link analysis techniques can be used to determine patterns in the graph. Search engines like Google make heavy use of web structure mining.
For example, the links are mined, and one can then determine the web pages that point to a particular web page. When a string is searched, a web page with many links pointing to it may appear first in the list. That is why web pages are listed by a rank which is calculated from the ranks of the web pages pointing to them [14]. Based on web structural data, web structure mining can be divided into two categories. The first kind extracts patterns from the hyperlinks in the web; a hyperlink is a structural component that connects a web page to a different web page or location. The other kind deals with the document structure, using the tree-like structure of the HTML or XML tags within web pages for analysis and description. With the continuous growth of e-commerce, web services and web applications, the volume of clickstream and user data collected by web-based organizations in their daily operations has increased. Organizations can analyze such data to determine the lifetime value of clients, design cross-marketing strategies, and so on [13]. Web usage mining deals with the data generated by users' clickstreams. "The web usage data includes web server access logs, proxy server logs, browser logs, user profiles, registration data, user sessions, transactions, cookies, user queries, bookmark data, mouse clicks and scrolls, and any other data resulting from interaction" [10]. Thus web usage mining is the most important task of web mining [12]. Weblog databases can provide rich information about web dynamics. In web usage mining, web log records are mined to discover user access patterns, through which potential customers can be identified, the quality of Internet services can be enhanced, and web server performance can be improved.
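The link-based ranking described above, where a page's rank is calculated from the ranks of the pages pointing to it, can be sketched as a simplified PageRank-style power iteration on a toy graph. The graph, damping factor and iteration count are illustrative assumptions for the sketch, not a description of Google's actual system.

```python
# A minimal sketch of link-based ranking: each page's rank is the sum
# of rank shares received from the pages that link to it, repeated
# until the values (roughly) stabilize.

links = {            # page -> pages it links to (hypothetical graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

best = max(rank, key=rank.get)
print({p: round(rank[p], 3) for p in sorted(rank)})
print("highest ranked:", best)
```

Page C, which receives links from three of the four pages, ends up with the highest rank, matching the intuition in the text that heavily linked-to pages are listed first.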
Many techniques can be developed for the implementation of web usage mining, but it is important to know that the success of such applications depends upon what, and how much, valid and reliable knowledge can be discovered from the log data. Most often, the web logs are cleaned, condensed and transformed before any useful and significant information is extracted from them. Web mining can be performed on web log records to find association patterns, sequential patterns, and trends of web access. The overall web usage mining process can be divided into three inter-dependent stages: data collection and pre-processing, pattern discovery, and pattern analysis [13]. In the data collection and preprocessing stage, the raw data is collected, cleaned and transformed into a set of user transactions representing the activities of each user during visits to the web site. In the pattern discovery stage, statistical, database, and machine learning operations are performed to retrieve hidden patterns representing the typical behavior of users, as well as summary statistics on web resources, sessions, and users.

3 Classification

3.1 What is Classification?

As the quantity and variety of the available data increase, robust, efficient and versatile data categorization techniques are needed for its exploration [16]. Classification is a method of assigning class labels to patterns; it is a data mining methodology used to predict group membership for data instances. For example, one may want to use classification to guess whether the weather on a specific day will be "sunny", "cloudy" or "rainy". (By contrast, the data mining techniques used to group similar data objects or points apart from others are called clustering.) Classification uses attribute values found in the data of one class to distinguish it from other types or classes, and it is chiefly concerned with the treatment of large datasets.
In classification we build a model by analyzing the existing data and describing the characteristics of the various classes of data; we can then use this model to predict the class or type of new data. Classification is a supervised machine learning procedure in which individual items are placed in a group based on quantitative information about one or more characteristics of the items. Decision trees and Bayesian networks are examples of classification methods. A related task is clustering: the process of finding similar data objects or points within a given dataset, where similarity can be defined through distance measures or any other parameter, depending upon the need and the given data. Classification is an ancient term as well as a modern one, since the classification of animals, plants and other physical objects is still valid today. Classification is a way of thinking about things rather than a study of things itself, so it draws its theory and application from the complete range of human experience and thought [18]. From a bigger picture, classification can cover medical patients grouped by disease, the images containing a red rose in an image database, the documents describing "classification" in a document/text database, equipment malfunctions grouped by cause, and loan applicants grouped by their likelihood of payment. In the last case, for example, the problem is to predict a new applicant's loan eligibility given old data about customers. There are many techniques used for data categorization and classification; the most common are decision tree classifiers and Bayesian classifiers.

3.2 Types of Classification

There are two types of classification: supervised classification and unsupervised classification. Supervised learning is a machine learning technique for discovering a function from training data. The training data contains pairs of input objects and their desired outputs.
The output of the function can be a continuous value, in which case the task is called regression, or it can predict a class label of the input object, in which case it is called classification. The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of inputs and target outputs). To achieve this goal, the learner needs to generalize from the presented data to unseen situations in a meaningful way. Unsupervised learning is a class of machine learning problems in which the goal is to determine how the data are organized. It is distinguished from supervised learning in that the learner is given only unlabelled examples. Unsupervised learning is closely related to the problem of density estimation in statistics; however, it also covers many other techniques used to summarize and explain the key features of the data. One form of unsupervised learning is clustering, which will be covered in the next chapter. Blind source separation based on independent component analysis is another example. Neural network models such as adaptive resonance theory and self-organizing maps are among the most commonly used unsupervised learning algorithms. There are many techniques for the implementation of supervised classification. We will discuss two of the most commonly used: decision tree classifiers and Naïve Bayesian classifiers.

3.2.1 Decision Tree Classifier

There are many alternative ways to represent classifiers; the decision tree is probably the most widely used. It is one of the most widely used supervised learning methods for data exploration. It is easy to use, can be expressed as if-then-else statements/rules, and can work well on noisy data [16]. A decision tree is a tree-like graph of decisions and their possible consequences, including resource costs, chance events, outcomes, and utilities.
Decision trees are most commonly used in decision analysis and operations research to help identify the strategy most likely to reach a target. In machine learning and data mining, a decision tree is used as a predictive model: a mapping from observations about an item to conclusions about its target value. More descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent classifications and branches represent conjunctions of features that lead to those classifications. The machine learning technique for inducing a decision tree from data is called decision tree learning, or simply decision trees. Decision trees are a simple but powerful form of multiple-variable analysis [15]. Classification is done by tree-like structures that have different test criteria for a variable at each of the nodes. New leaves are generated based on the results of the tests at the nodes. A decision tree is a supervised learning system in which classification rules are constructed from the tree. Decision trees are produced by algorithms that identify various ways of splitting a data set into branch-like segments; they try to find a strong relationship between input values and target values within the dataset [15]. In classification tasks, decision trees visualize what steps should be taken to reach a classification. Every decision tree starts with a node called the root node, which is considered the parent of every other node. Each node in the tree tests an attribute in the data and decides which path to follow; typically the decision test compares a value against some constant. Classification with a decision tree is performed by traversing from the root node down to a leaf node. Decision trees are able to represent and classify diverse types of data. The simplest form of data is numerical data, which is also the most familiar.
Handling nominal data is also required in many situations. Nominal quantities are normally represented by a discrete set of symbols. For example, weather can be described in either nominal or numeric fashion: temperature can be quantified by saying that it is eleven degrees Celsius or fifty-two degrees Fahrenheit, or the terms cool, mild, cold, warm or hot can be used. The former is numeric data, while the latter is an example of nominal data. More precisely, cool, mild, cold, warm and hot form a special type of nominal data known as ordinal data. Ordinal data usually carries an implicit assumption of ordered relationships among the values. In the weather example, purely nominal descriptions like rainy, overcast and sunny can also be added; these values have no order or distance measures among them. Decision trees are trees in which each node is a question, each branch is an answer to a question, and each leaf is a result. Here is an example of a decision tree. Roughly, the idea is that depending on the number of items in stock, we make different decisions. If we don't have much stock, we buy at almost any cost; if we have a lot of items, we buy only if the price is low. If there are fewer than 10 items in stock, buy all that is needed if the unit price is less than £10, otherwise buy only 10 items. If there are 10 to 40 items in stock, check the unit price: if it is less than £5, buy only 5 items, otherwise there is no need to buy anything expensive since stock is already adequate. If there are more than 40 items in stock, buy 5 if and only if the price is less than £2, otherwise there is no need to buy such expensive items. In this way a decision tree helps us make a decision at each level. Here is another example of a decision tree, representing the risk factor associated with rash driving.
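Before turning to that second example, the stock-buying rules can be written as nested if-else tests, one per node of the tree. The boundary handling (treating exactly 10 or 40 items as the middle branch) and the literal "buy all" return value are assumptions, since the prose does not pin them down.

```python
def purchase_decision(stock, unit_price):
    """Traverse the stock-buying decision tree: test at each node, follow a branch."""
    if stock < 10:            # root test: low stock, buy at (almost) any cost
        return "buy all" if unit_price < 10 else "buy 10"
    if stock <= 40:           # middle branch: 10 to 40 items in stock
        return "buy 5" if unit_price < 5 else "buy nothing"
    # more than 40 items in stock: only buy if very cheap
    return "buy 5" if unit_price < 2 else "buy nothing"

print(purchase_decision(8, 7))    # "buy all"
print(purchase_decision(25, 3))   # "buy 5"
print(purchase_decision(50, 6))   # "buy nothing"
```

Each call performs exactly one root-to-leaf traversal, which is the classification procedure described earlier.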
The root node at the top of the tree is the feature that is split first, for highest discrimination. The internal nodes encode decision rules on one or more attributes, while the leaf nodes are class labels. A person aged under 20 has a very high risk, while a person aged over 30 has a very low risk. The middle category, a person aged between 20 and 30, depends upon another attribute: the car type. If the car is a sports car, there is again high risk involved; if a family car is used, the risk is low. In science, engineering and applied areas including business intelligence and data mining, many useful features have been introduced as a result of the evolution of decision trees. * With the help of transformations in decision trees, the volume of data can be reduced to a more compact form that preserves its major characteristic
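The age/car-type risk tree just described can likewise be expressed as two levels of tests. How ages of exactly 20 or 30 are classified is an assumption here, since the prose gives only strict inequalities; this sketch routes them to the middle branch.

```python
def driving_risk(age, car_type):
    """Classify by traversing from the root (age) down to a leaf (risk label)."""
    if age < 20:              # root split: youngest drivers
        return "high"
    if age > 30:
        return "low"
    # internal node: ages 20-30 are split on a second attribute, the car type
    return "high" if car_type == "sports" else "low"

print(driving_risk(18, "family"))  # "high"
print(driving_risk(25, "sports"))  # "high"
print(driving_risk(45, "sports"))  # "low"
```

Note how the second attribute is consulted only on the middle branch: a decision tree need not test every feature for every instance.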

Wednesday, November 13, 2019

The Great Depression Essay -- American History

The Great Depression was one of the lowest times in American history. Although the Depression brought great poverty to some areas, others were hardly fazed by it. For some it meant extreme poverty; for others, who had little money invested in banks or in the stock market, nothing really changed. It even seemed that for those who were impacted the least, changes would not occur until after the Depression was over. In fact, some never even knew that there was a depression going on until word filtered down through the tabloids. This economic tragedy was forever changed by the election of 1932, which eventually brought on the New Deal, a set of legislative programs that would forever change America. The 1920s were a time when America was flourishing, with businesses in the North and farming in the South, and when the stock exchange could make one rich if the right investment was made. The South presented a different type of wealth: not money, but hard labor and excessive amounts of food. Over time, with the stock market rising to record highs and fields producing excess crops, a breaking point approached. It came in 1929, with too much stock and not enough buyers to purchase it. This led to a great deal of panic in the North. People's response was to quickly pull out of the investments they had made and run to their banks to withdraw their funds. Even people who had money in the bank before it shut down had no chance of getting their money back; there was no insurance to protect against such losses, as it had not yet been devised by the government. This resulted in money being taken out of circulation. The North's economy had hit a jolting halt--t... ...onal system gave those who believed they could only provide through manual labor other opportunities to prosper.
These changes brought more people into the South, which would forever change not just the state of Tennessee but the way the South is today. The New Deal would have a lasting effect on the economy and would eventually bring America out of the Great Depression. President Roosevelt would be named one of the greatest presidents of all time. Tennessee would forever change the way the South would be looked upon, as the foundation of the Tennessee Valley Authority (TVA). As for the stock market, it would not recover until the 1950s. The banks would eventually become regulated by the government and be forced by Roosevelt to acquire insurance on people's money for up to $100,000. These actions paved the way for what is now a better America.

Monday, November 11, 2019

Emily Dickinson Essay

Emily Dickinson's great skill and unparalleled creativity in playing with words and their connotations are evident in her attempt to convey to the reader the power of a book. In this poem, she considers the power of books, or of poetry, to carry us away from our immediate surroundings to a world of imagination. Her poem is suffused with (full of) metaphors, as she is desirous of likening a book to various means of transportation. To do this she alludes (allusion, noun) directly to concrete objects such as "frigate," "coursers" and "chariot," which carry archaic (ancient) connotations. The difficulty inherent in the use of these vehicles has to do with the reader's knowledge of the properties and characteristics evinced by a "frigate," "coursers" and a "chariot." The poetess associates the swiftness of a "frigate," "coursers" and a "chariot," as well as their use to explore new lands and seas, with the power of a book or poetry to usher (lead, guide) us into another dimension, perhaps shrouded (covered) in mystery but definitely rewarding. If the reader is not acquainted (familiar) with these means of transportation that reigned supreme, so to speak, centuries ago, he or she is denied access to the meaning that the poet seeks to impart by means of these vehicles. But Emily Dickinson does not limit herself to these vehicles alone; the whole poem is reminiscent (suggestive) of a past era when people used "frigate[s]," "coursers" and "chariot[s]" to travel "lands away." The words "traverse" (to cross an area of land or water), "oppress" (stress) and "frugal" (simple and inexpensive), with which the poem is interspersed, are all of Latin origin, lending it a formal hue. She has been careful to choose kinds of transportation and names for books that have romantic connotations.
"Frigate" suggests exploration and adventure; "coursers," beauty, spirit and speed; "chariot," speed and the ability to go through air as well as on land. The chariot reminds us of the myth of Phaethon, who tried to drive the chariot of Apollo (Greek god of the sun), and of Aurora (goddess of the dawn) with her horses. How much of the meaning of the poem comes from this selection of vehicles and words is apparent if we try to substitute steamship for "frigate," horses for "coursers," and streetcar for "chariot." How would the poem sound if, instead of likening a book to a "frigate," "coursers," and a "chariot," one resolved to use a "Mercedes Benz," a "GMC" or a "Porsche" to convey the same meaning, that of speed and swiftness? Emily Dickinson's shrewdness in selecting the most appropriate diction is superb and undoubtedly holds up a mirror for the reader to see what she had in mind when writing the poem. On a more technical note, related to the rhyme scheme, the poem is written in open form, or free verse (from the French vers libre), as indicated by the lack of a regular rhyme pattern, as a parallel to "prancing poetry" or the power of a book to carry you to foreign "lands" where no man has ever trod before. Liberated from the confines and shackles of rhyme, Emily Dickinson's "There is no frigate like a book" makes a permanent impression on the reader, as it "entangles... a part of the Divine essence," to quote W. B. Yeats.

Allusions in "There is no Frigate like a Book"

1. The story of Phaethon

In Greek mythology, Phaethon (or Phaeton) was the son of Helios (Phoebus). Perhaps the most famous version of the myth is given to us by Ovid in his Metamorphoses (Book II). The name "Phaethon" means "the shining." In the version of the myth told by Ovid in the Metamorphoses, Phaethon ascends into heaven, the home of his suspected father.
His mother Clymene had boasted that his father was the sun-god Apollo. Phaethon went to his father, who swore by the river Styx to give Phaethon anything he should ask for, in order to prove his divine paternity. Phaethon wanted to drive his chariot (the sun) for a day. Though Apollo tried to talk him out of it, telling him that not even Zeus (the king of the gods) would dare to drive it, for the chariot was fiery hot and the horses breathed out flames, Phaethon was adamant. When the day came, Apollo anointed Phaethon's head with magic oil to keep the chariot from burning him. Phaethon was unable to control the fierce horses that drew the chariot, as they sensed a weaker hand. First the chariot veered too high, so that the earth grew chill. Then it dipped too close, and the vegetation dried and burned. He accidentally turned most of Africa into desert, bringing the blood of the Ethiopians to the surface of their skin and turning it black. "The running conflagration spreads below. But these are trivial ills: whole cities burn, And peopled kingdoms into ashes turn." [3] Rivers and lakes began to dry up; Poseidon rose out of the sea and waved his trident in anger at the sun, but soon the heat became too great even for him and he dove to the bottom of the sea. Eventually, Zeus was forced to intervene by striking the runaway chariot with a lightning bolt to stop it, and Phaethon plunged into the river Eridanos. Apollo, stricken with grief, refused to drive his chariot for days. Finally the gods persuaded him not to leave the world in darkness. Apollo blamed Zeus for killing his son, but Zeus told him there was no other way. This story has given rise to two latter-day meanings of "phaethon": one who drives a chariot or coach, especially at a reckless or dangerous speed, and one who would or may set the world on fire.
2. Aurora, goddess of the dawn (equivalent to the Greek goddess Eos)

In Roman mythology, Aurora, goddess of the dawn, renews herself every morning and flies across the sky in her chariot, announcing the arrival of the sun. Her parentage was flexible: for Ovid, she could equally be Pallantis, signifying the daughter of Pallas, [1] or the daughter of Hyperion. [2] She has two siblings, a brother (Sol, the sun) and a sister (Luna, the moon). Rarely, Roman writers [3] imitated Hesiod and the later Greek poets and made the Anemoi, or Winds, the offspring of the father of the stars, Astraeus, with Eos/Aurora.

Saturday, November 9, 2019

Religious Studies Essays

Religious Studies

The relationship between religion and literature is illustrated where several themes are integrated with the views of religion. This essay also focuses on explaining the connection between people's concerns, their religious inclinations, and the literary styles used to articulate them. Several themes are illustrated in various sources of religious information, including films, books and articles; this document, however, will focus on discussing several themes in the religious movie Fireproof. Fireproof tells the story of a firefighter who faces the problem of his wife wanting to divorce him. His father pleads with him to postpone the divorce for a period of forty days and gives him a book known as the Love Dare (Richard 13). The book is meant to solve the firefighter's marriage problems, because it describes the nature of true love and gives step-by-step guidance for solving relationship issues. One of the themes found in Fireproof is forgiveness: it relates the scene where the firefighter is forgiven by his wife to the religious forgiveness that an individual experiences when he is in a relationship with God (Solomon 23). It illustrates the importance of an individual's marriage, or relationship to God, as a valuable investment, because through this relationship all other things in life thrive. Therefore, fireproofing or safeguarding this worthy union protects the believer from being tempted to transgress. In addition, believers become united with God and hence are able to maintain peace with their enemies (Solomon 23). The next theme found in Fireproof is faith, illustrated when the firefighter decides to trust in the book his father gave him just as he was about to give up on his marriage and divorce his wife (Anker 6).
The firefighter chose to believe that he would find some hope once he began reading the book about saving his marriage. Faith is also witnessed in the concluding scene, in which he decides to become a born-again Christian. He runs to the backyard of his father's house and weeps at a cross as his father meets him to comfort and pray for him. He finally takes a step of faith by giving his life to Jesus Christ, with the hope of experiencing peace and happiness in his life, marriage and family. The theme of unconditional love is illustrated in the concluding scene, where the firefighter experiences divine forgiveness. The day after giving his life to Christ, the man appears very happy and peaceful, compared with other times when he seemed to be in a cranky mood. This means that the love of God is unconditional and limitless, since anyone is allowed to experience it, and He wants people to live honoring Him in order to experience that love (Chris and Rao, M.D 262). Another theme in Fireproof is addiction. It is displayed in the scenes in which the firefighter is addicted to internet pornography. As a result, his wife complains several times about this habit, and it ends up being one of her reasons for wanting to divorce him. The man struggles with quitting the habit but is unsuccessful until the day he gives his life to Jesus Christ (Stephen 57). From that day, he develops a profound strength and faith that help him overcome this habit. For example, instead of using the computer when he arrives home from work, he develops an interest in reading the Bible. Biblical principles have also been used as a form of literature in Fireproof, based on the strong foundation of marriage. The movie explains the principle that every relationship has to face certain challenges, because different circumstances in life always find a way to interfere in relationships.
However, the problems faced by couples are meant to strengthen the marital relationship, since they enable each person in the commitment to display strong positive qualities, including the love, patience and persistence that make it easier to overcome these challenges. In addition, these qualities determine the commitment of a spouse to building a strong base for the relationship (Gabriel 121). The next theme shown in the film is obedience. It reflects the Christian view of obedience by explaining that if a person obeys God's commands, he is likely to overcome any challenge in life thrown at him. For example, after the firefighter decided to trust in God by leaving his old habits, such as his addiction to pornography, he was able to appreciate and treat his wife with more love and respect; as a result, their marriage bond became stronger despite the challenge of losing their younger son to cancer (Lynn and Mark 125). The next biblical principle illustrated in the film is how man has been commanded by God to love his wife. In the scene where the firefighter becomes a believer, he realizes that he is peaceful and happy once he learns to appreciate and love his wife, unlike in the past when he was always complaining and moody toward her. This shows that for him to experience God's forgiveness, he had to take the step of loving his wife and abandoning his old harsh ways of treating her. The film also illustrates the dynamics of family through the aspects of love and togetherness. For example, after the reconciliation of the married couple, they focus on loving their children by praying for and convincing their older son to give his life to Jesus Christ in order to experience the same peace and happiness they felt (Catt 218).
In addition, it shows that family is built on the strong foundation of marriage, since the parents were able to focus more on their children once their marriage was reconciled than before, when they were facing challenges (Douglas 142).

References

Anker, Roy M. Of Pilgrims and Fire: When God Shows Up at the Movies. Grand Rapids, Mich.: W.B. Eerdmans Pub. Co, 2010. Print.
Catt, Michael C. The Power of Desperation: Breakthroughs in Our Brokenness. Nashville, Tenn.: B&H Pub. Group, 2009. Print.
Connelly, Richard. Lost Art of Romance: How to Romance a Lady. S.l.: Trafford On Demand Pub, 2009. Print.
Cowan, Douglas E. Sacred Terror: Religion and Horror on the Silver Screen. Waco, Tex.: Baylor University Press, 2008. Print.
Kendrick, Stephen. Holy Clues: The Gospel According to Sherlock Holmes. New York: Vintage Books, 2000. Print.
McKee, Gabriel. The Gospel According to Science Fiction: From the Twilight Zone to the Final Frontier. Louisville: Westminster John Knox Press, 2007. Print.
Solomon, Stephannie E. R. Living with the King: Meditations That Teach, Transform and Transcend. S.l.: Authorhouse, 2009. Print.
Suszek, Lynne, and Suszek, Mark. First Wash the Inside. Nashville, U.S.A.: Lockman Foundation, 2009. Print.

Wednesday, November 6, 2019

Free Essays on Genocide Of Indigenous Australia

Assimilation and Genocide of Indigenous Australia

"Anyone who closes his eyes to the past is blind to the present. Whoever refuses to remember the inhumanity is prone to risks of re-infection." - Richard von Weizsaecker, former President of the Federal Republic of Germany

Very few people use the word genocide when discussing the strife the Australian indigenous people endured. Almost all historians of the Aboriginal experience, black and white, avoid it. Typically, they write about pacifying, killing, cleansing, excluding, exterminating, starving, poisoning, shooting, beheading, sterilizing and exiling, but they avoid genocide. Could it be that most understand genocide on one level only? For many, and especially Australians, genocide is something of the Germans, Cambodians and Hutus, not the Australians. As for the rest of the world, the experience of the indigenous people of Australia is not as familiar as the images of Auschwitz or the killing fields of Cambodia. Clearly, there is no Australian Dachau. In this paper, I will examine the experience of the indigenous people of Australia and the lack of action on behalf of the Australian government. I will also investigate the extent to which the policies and practices of the colonial Australian government in the period 1838 to 1911 can be classified as 'genocide', using the United Nations Genocide Convention of 1948 (which Australia ratified in 1951) as a guideline, as well as comparing the indigenous Australian experience under British state control with that of the Jews under Nazi rule during the Holocaust. As Australian federal law stands to this date, it is not illegal to commit domestic genocide in Australia. Although the Australian government signed the international United Nations Genocide Convention in 1948, and ratified it in 1951, none of its provisions have since been implemented in federal law. What is genocide? Firstly one must understand exact...

Monday, November 4, 2019

Critically assess the arguments for and against adopting a stakeholder Essay

Critically assess the arguments for and against adopting a stakeholder perspective - Essay Example

While the proponents of the shareholder perspective have their views, so do the proponents of the stakeholder perspective. In light of these two approaches, this paper will critically analyze the stakeholder perspective. First, the shareholder approach focuses primarily on the owners, the individuals holding shares in the company, with little regard for the rest. That is to say, all decisions are taken with the concerns of the owners first and others' later. Conversely, the stakeholder perspective advocates for the recognition of other organs of the business whose influence is also vital. These include the customers, employees, suppliers and governmental bodies, not only the owners. According to Freeman's book, the traditional shareholder approach was not only unhelpful to the owners but also to the business, since it failed to address customer needs (Miles 2012). Analysis of this notion seems to reveal some truth in it, though it also has demerits, which could lead to negative impacts on society. Lately, many companies have adopted stakeholder theory as a way of increasing profitability. The approach seeks to place the people seen as important to a business in a higher position (Miles 2012). That is to say, all actions taken by a corporation are focused on benefitting these people first. In this approach, the major people regarded highly include the owners, shareholders, customers, suppliers and, in some cases, even competitors. Since this approach focuses on the most important stakeholder in the business, the customer to be precise, a number of benefits are evident. Freeman suggested that whenever a business applies the stakeholder approach, even the shareholders' interests are safeguarded. Additionally, when this approach is adopted, the business will last longer and even exhibit better performance (Miles 2012).
This is because businesses are able to analyze the needs of their various stakeholders

Saturday, November 2, 2019

Contract II Coursework Question Essay Example | Topics and Well Written Essays - 2000 words

Contract II Coursework Question - Essay Example

In order to circumvent a contract on the grounds of frustration, it has to be established that events had not only made it much more difficult to comply with the contractual obligations, but had also destroyed its very foundation. The BBL Company should have made alternative arrangements to contend with the problems arising from the failure of machinery. As per the case law discussed in the sequel, contractual terms that become more burdensome cannot provide a defence of frustration of the contract. The BBL Company had breached the implied terms stipulated by the Supply of Goods and Services Act 1982, as it had failed to complete the work within the specified time. In Bush v Trustees of Port and Town of Whitehaven, it was held by the court that the contractual terms had changed sufficiently for the contractor to claim an additional amount for the inordinate delay.3 This decision was censured in the Davis Contractors case, where it was opined that a party to a contract could not claim relief from a contractual obligation merely on the grounds that the contract had become more onerous to perform.4 Consequently, a quantum meruit claim arises only when the circumstances change to such an extent that the contract is frustrated. The mere fact that the contract has become more expensive or has changed appreciably does not constitute frustration of the contract.5 As a result, the goods had to be sent via a much longer route. This doubled the cost, and the appellants contended that the contract had been frustrated. The House of Lords ruled that there was no frustration, as the shipping route had not been specified.7 As such, it was held that a mere increase in cost did not constitute grounds for the frustration of a contract. In Davis Contractors Ltd v Fareham UDC, a contract had been formed for the construction of a number of houses.