Tuesday, October 29, 2019

Effectiveness of Public Private Partnerships: Cooperation Between Business and the Government Essay

Effectiveness of Public Private Partnerships: Cooperation Between Business and the Government - Essay Example

In this sort of relationship, the two parties are co-dependent. The government's responsibility is of primary importance, as it is solely the government's duty to provide public services. Hence, governments make use of such partnerships and derive substantial financial benefits from them. Apart from providing value for money, public private partnerships are also useful for providing newly formulated designs, advanced public services, and a much faster construction process. Some governments have been accused of using public private partnerships for political motives. Public private partnerships function properly only when there is mutual affinity between the private and public sectors, which keeps the resulting services neither privatized nor nationalized. In addition, such partnerships create a mutual dependency that allows the parties to attain remarkable results. The core issue in public private partnerships is budgeting, for it depends entirely on the financing of the private group. For instance, if the finances and costs of the project are the responsibility of the private sector, the government does not feel the need to take control of financial matters. However, when funds are unavailable, the government feels the need to meet the financial requirement by imposing public taxes. The Trans-European Network for Transport has been proposed to alleviate problems related to financing in public private partnerships. This network is constructed to meet the requirements of recession and inflation. As with the declining economic condition of many countries,...

This paper outlines the growing importance of successful cooperation between the public and private sectors, in the form of public private partnerships, in the world today. A well designed project not only helps the parties stay committed to their work but also offers scope for improvement over the previous project: risk factors are reduced, reliability is higher, and the project is more efficient. One of the biggest advantages of a public private partnership is that, in case of any mishap or loss in the project, the expenses are shared by both parties involved, so the risk of larger losses is minimized. Another significant advantage of these partnerships is that it remains essentially the government's duty to decide the user charges at which an ordinary person can use the infrastructure. Due to the current economic recession, the significance of public private partnerships has increased: governments now face grave economic pressure, and such partnerships have become a strong medium for facilitating the building of infrastructure, as investors are encouraged to invest in infrastructure that may include hospitals, recreational parks, educational institutions, and so on. There are many forms of public private partnerships depending on who controls the business and finances of the project, which at times is solely the public sector and at other times the private sector.

Sunday, October 27, 2019

Constructing Social Knowledge Graph from Twitter Data

Constructing Social Knowledge Graph from Twitter Data
Yue Han Loke

1.1 Introduction
The current era of technology allows users to post and share their thoughts, images, and content across networks through different applications and websites such as Twitter, Facebook and Instagram. With social media becoming part of our daily lives and data sharing becoming the norm for the current generation, researchers have started to perform studies on the data that can be collected from social media [1][2]. The context of this research is dedicated solely to Twitter data, because of its publicly available wealth of data and its public Stream API. Twitter tweets can be used to discover new knowledge, such as recommendations and relationships, for data analysis. Tweets are short microblog posts of at most 140 characters that can range from normal sentences to hashtags and @ mentions, short abbreviations (gtg, 2night), and informal variants of words (yup, nope). Observing how tweets are posted shows the noisy and short lexical nature of these texts, which presents a challenge for Twitter data analysis. On the other hand, existing research on entity extraction and entity linking has narrowed the gap between the entities extracted and the relationships that can be discovered. Since 2014, the Named Entity rEcognition and Linking (NEEL) Challenge [3] has demonstrated to the research and commercial communities the significance of automated entity extraction, linking and classification over event streams of English tweets, and has driven the design and development of systems that can cope with the challenging nature of tweets and mine semantics from them.

1.2 Project Aim
This research aims to construct a social knowledge graph (knowledge base) from Twitter data. A knowledge graph is a technique for analysing social media networks by mapping and measuring both relationships and information flows among groups, organizations, and other connected entities in social networks [4]. A few tasks are required to successfully create a knowledge graph based on Twitter data. One method to aid in the construction of the knowledge graph is to extract named entities such as persons, organizations, locations, or brands from the tweets [5]. In the domain of this research, a named entity referenced in a tweet is defined as a proper noun or acronym found in the NEEL Taxonomy in Appendix A of [3], and it is linked to either an English DBpedia [6] referent or a NIL referent. The second component in creating a social knowledge graph is to take those extracted entities and link them to their respective entities in a knowledge base. For example, consider the tweet "The ITEE department is organizing a pizza get-together at UQ. #awesome". Here ITEE refers to an organization, and UQ refers to an organization as well. The annotations for this are [ITEE, Organization, NIL1], where NIL1 is the unique NIL referent describing the real-world entity ITEE that has no equivalent entry in DBpedia, and [UQ, Organization, dbp:University_of_Queensland], which represents the RDF triple (subject, predicate, object).

1.3 Project Goals
The first goal is to obtain the tweets. This can be achieved by crawling Twitter data using the Public Stream API[1] available on the Twitter developer website. The Public Stream API allows extraction of Twitter data in real time.
The next goal is entity extraction and typing, with the aid of a specifically chosen information extraction pipeline called TwitIE[2], which is open source, specific to social media, and has been tested most extensively on microblog sentences. This pipeline receives the tweets as input and recognises the entities in each tweet. The third task is to link the entities mined from tweets to the entities in the available knowledge base. The knowledge base selected for this project is DBpedia. If there is a referent in DBpedia, the extracted entity is linked to that referent, and the entity type is retrieved based on the category received from the knowledge base. If no referent is available, a NIL identifier is given, as shown in section 1.2. An entity linking system with appropriate entity disambiguation and candidate entity generation must also be selected; it receives the extracted entities from the same tweet and produces a list of all the candidate entities in the knowledge base, and the task is then to accurately link each extracted entity to the correct candidate. The social knowledge graph is an entity-entity graph combining two sources: the co-occurrence of extracted entities in the same tweet or sentence, and the existing relationships or categories extracted from DBpedia. Thus, the project aims to combine the co-occurrence of extracted entities with the extracted relationships to create a social knowledge graph that unlocks new knowledge from the fusion of the two data sources.

Named Entity Recognition (NER) and Information Extraction (IE) are generally well researched for longer text such as newswire. However, microblogs are possibly the hardest kind of content to process. For Twitter, some methods have been proposed by the research community, such as [7], which uses a pipeline approach that first performs tokenisation and POS tagging and then uses topic models to find named entities. [8] proposes a gradient-descent graph-based method for joint text normalisation and recognition, reaching an 83.6% F1 measure. Entity linking in knowledge graphs has been studied in [9] using a graph-based method that collectively gathers the referent entities of all named entities in the same document and models and exploits the global interdependence between entity linking decisions. However, the combination of NER and entity linking in Twitter tweets is still a new area of research since the NEEL challenge was first established in 2013. Based on the evaluation of the NEEL challenge in [10], a lexical-similarity mention detection strategy that exploits the popularity of entities and applies distance similarity functions to rank them efficiently is used, together with n-gram [11] features. Conditional Random Fields (CRF) [12] is another mentioned entity extraction strategy. In the entity detection context, graph distances and various ranking features were used.

2.1. Twitter Crawling
[13] notes that the public Twitter Streaming API provides the ability to collect a sample of user tweets. Using the statuses/filter endpoint provides a constant stream of public tweets, and multiple optional parameters such as language and locations may be specified. Applying the CreateStreamingConnection method, a POST request to the API returns the public statuses as a stream.
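As a rough illustration of this crawling step, the sketch below uses the tweepy library's older 3.x streaming interface, which wrapped the public Streaming API endpoints (statuses/sample and statuses/filter) referred to above; the credential strings and output file name are placeholders, and later Twitter/X API versions no longer expose this interface in the same form.

```python
# Minimal sketch of the tweet-crawling step, assuming tweepy 3.x (StreamListener
# was removed in tweepy 4). Credentials and the output path are placeholders.
import json
import tweepy


class TweetCollector(tweepy.StreamListener):
    """Append each incoming tweet's raw JSON to a JSON-lines file."""

    def __init__(self, out_path="tweets.jsonl"):
        super().__init__()
        self.out = open(out_path, "a", encoding="utf-8")

    def on_status(self, status):
        # Keep the raw JSON so downstream tools (e.g. TwitIE) can ingest it.
        self.out.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        # Returning False disconnects the stream, e.g. on rate limiting (420).
        if status_code == 420:
            return False


auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

stream = tweepy.Stream(auth=auth, listener=TweetCollector())
# Random ~1% sample of public tweets, restricted to English, as the project requires.
stream.sample(languages=["en"])
```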
The rate limit of the Streaming API allows each application to submit up to 5,000 Twitter user IDs to follow [13]. Based on the documentation, Twitter currently allows the public to retrieve at most a 1% sample of the data posted on Twitter at a given time, and it begins returning sampled data to the user once the number of matching tweets reaches 1% of all tweets on Twitter. According to [14], which compares the Twitter Streaming API with the Twitter Firehose, how well the Streaming API serves a study depends strongly on the coverage and the type of analysis the researcher wishes to perform. For example, the researchers found that, for a given set of parameters, the coverage of the Streaming API is reduced as the number of tweets matching those parameters increases. Thus, if the research concerns filtered content, the Twitter Firehose would be a better choice, despite its drawback of restrictive cost. However, since our project requires random sampling of Twitter data with no filter other than the English language, the Twitter Streaming API is an appropriate choice, since it is freely available.

2.2. Entity Extraction
[15] presents an open-source pipeline called TwitIE, which is dedicated solely to social media components in GATE [16]. TwitIE consists of the following parts: tweet import, language identification, tokenisation, gazetteer, sentence splitter, normalisation, part-of-speech tagging, and named entity recognition. Twitter data is delivered from the Twitter Streaming API in JSON format. TwitIE includes a new Format_Twitter plugin in the most recent GATE codebase which converts tweets in JSON format automatically into GATE documents. This converter is automatically associated with document names that end in .json; otherwise the MIME type text/x-json-twitter should be specified. TwitIE uses TextCat, a language identification algorithm, for its language identification, and it provides reliable language identification for tweets written in English, feeding the English POS tagger and named entity recogniser. The tokeniser handles different character classes, sequences and rules. Since TwitIE deals with microblogs, it treats abbreviations and URLs as one token each, following Ritter's tokenisation scheme. Hashtags and user mentions are treated as two tokens each and are covered by a separate annotation. Normalisation in TwitIE is divided into two tasks, identifying orthographic errors and correcting the errors found, and the TwitIE normaliser is designed specifically for social media. TwitIE reuses the ANNIE gazetteer lists, which contain lists such as cities, organisations, days of the week, and so on. TwitIE uses an adapted version of the Stanford part-of-speech tagger, trained on tweets tagged with the Penn Treebank (PTB) tagset. Using the combination of normalisation, gazetteer name lookup, and the POS tagger, performance increased to 86.93%, and further to 90.54% token accuracy when the PTB tagset was used. Named entity recognition in TwitIE shows a +30% absolute precision and +20% absolute performance increase compared to ANNIE, mainly with respect to Date, Organization and Person. [7] proposed an innovative approach to distant supervision using topic models that pulls a large number of entities gathered from Freebase and a large amount of unlabelled data. Using those gathered entities, the approach combines information about an entity's context across its mentions.
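TwitIE itself is a Java/GATE pipeline, so as a rough stand-in the sketch below shows what the entity extraction step looks like with spaCy instead; the model name and tweet are illustrative, and a newswire-trained model will miss much of the tweet-specific normalisation and tagging that TwitIE provides.

```python
# Stand-in for the entity-extraction step using spaCy rather than the
# GATE/TwitIE pipeline described above; purely illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")  # illustrative model choice

tweet = "The ITEE department is organizing a pizza get-together at UQ. #awesome"
doc = nlp(tweet)

# Each mention comes back as a surface form plus a coarse type, which is
# the input the linking stage described in section 1.3 expects.
mentions = [(ent.text, ent.label_) for ent in doc.ents]
print(mentions)  # e.g. [('ITEE', 'ORG'), ('UQ', 'ORG')] if the model recognises them
```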
The T-NER system's POS tagging component, called T-POS, adds new tags for Twitter-specific phenomena such as retweets, usernames, URLs and hashtags. The system uses clustering to group together distributionally similar words to handle lexical variations and out-of-vocabulary words, and it utilizes Brown clusters and Conditional Random Fields. The combination of both features makes it possible to model strong dependencies between adjacent POS tags and to exploit highly correlated features. The results of T-POS are reported on a 4-fold cross validation over 800 tweets: T-POS outperforms the Stanford tagger, obtaining a 26% reduction in error, and when trained on 102K tokens there is an error reduction of 41%. The system includes shallow parsing, which can identify non-recursive phrases such as noun, verb and prepositional phrases in text. T-NER's shallow parsing component, called T-CHUNK, obtains better performance at shallow parsing of tweets than the off-the-shelf OpenNLP chunker, with a reported 22% reduction in error. Another component of T-NER is the capitalization classifier, T-CAP, which analyses a tweet to predict capitalization. Named entity recognition in T-NER is divided into two components: named entity segmentation using T-SEG, and classification of named entities using LabeledLDA. T-SEG treats segmentation as a sequence-labelling task with IOB encoding, using Conditional Random Fields for learning and inference. It relies on contextual, dictionary and orthographic features: a set of type lists gathered from Freebase is included in in-house dictionaries, and the outputs of T-POS, T-CHUNK and T-CAP, together with the Brown clusters, are used to generate features. As stated in the paper, compared with the state-of-the-art news-trained Stanford Named Entity Recognizer, T-SEG obtains a 52% increase in F1 score. To address the lack of context in tweets for identifying the types of entities they contain, and the large number of distinctive named entity types present in tweets, the paper presents and assesses a distantly supervised approach based on LabeledLDA. This approach models every entity as a combination of types, which allows information about an entity's distribution over types to be shared across mentions, naturally handling ambiguous entity strings whose mentions could refer to different types. Based on the empirical experiments conducted, there is a 25% increase in F1 score over the co-training approach to named entity classification suggested by Collins and Singer (1999) when applied to Twitter.

[17] proposed a Twitter-adapted version of Kanopy called Kanopy4Tweets, which interlinks text documents with a knowledge base by using the relations between concepts and their neighbouring graph structure. The system consists of four parts: Named Entity Recognition (NER), Named Entity Linking (NEL), Named Entity Disambiguation (NED) and NIL Resources Clustering (NRC). The NER component of Kanopy4Tweets uses TwitIE, the Twitter information extraction pipeline mentioned above. For NEL, a DBpedia index is built using a selection of datasets to search for suitable DBpedia resource candidates for each extracted entity. The datasets are stored in a single binary file using the HDT RDF format, which has compact structures due to its binary representation of RDF data and allows fast search functionality without the need for decompression.
The datasets can be quickly browsed and scanned for a specific object, subject or predicate at a glance. For each named entity found by the NER component, a list of resource candidates retrieved from DBpedia can be obtained using a top-down strategy. One challenge found is that the large volume of resource candidates negatively impacts the processing time of the disambiguation process. However, this problem can be resolved by reducing the number of candidates with a ranking method: the proposed method ranks the candidates according to the document score assigned by the indexing engine and selects the top-x elements. The NED component takes as input the list of named entities together with their candidate DBpedia resources from the preceding NEL step, and selects the best candidate resource for each named entity as output. A relatedness score, based on the number of paths between resources weighted by the exclusivity of the edges on these paths, is computed for each candidate with respect to the candidate resources of all other entities. The input named entities are then jointly disambiguated and linked to the candidate resources with the highest combined relatedness. NRC is the stage that handles named entities for which no resource in the knowledge base can be linked. Using the Monge-Elkan similarity measure, the first NIL element is assigned to a new cluster, and each subsequent element is compared with the previous ones: if its similarity to an existing cluster is above a fixed threshold, the element is added to that cluster, whereas a new cluster is formed if no existing cluster exceeds the threshold.

2.3. Entity Extraction and Entity Linking
[18] proposed a lexicon-based joint entity extraction and entity linking approach, in which n-grams from tweets are mapped to DBpedia entities. A pre-processing stage cleans and classifies the part-of-speech tags and normalises the initial tweets, converting alphabetic, numeric, and symbolic Unicode characters to ASCII equivalents. Tokenisation is performed on non-alphanumeric characters, except for special characters joining compound words. The resulting list of tokens is fed into a shingle filter to construct token n-grams from the token stream. In the candidate mapping component, a gazetteer compiled from DBpedia redirect labels, disambiguation labels and entity labels, each linked to its own DBpedia entity, is used to map each token. All labels are indexed in lowercase and linked only by exact match to the list of candidate entities. The researchers prioritize longer tokens over shorter ones to remove possible overlaps between tokens. For each entity candidate, the approach considers both local and context-related features via a pipeline of analysis scorers. Examples of local features are the string distance between the candidate labels and the n-gram, the origin of the label, its DBpedia type, the candidate's link-graph popularity, the level of uncertainty of the token, and the best-matching surface form. The relation between a candidate entity and the other candidates in a given context is, in turn, assessed by the context-related features.
Examples of these context-related features are direct links to other context candidates in the DBpedia link graph, co-occurrence of other tokens' surface forms in the Wikipedia article corresponding to the candidate under consideration, co-references in Wikipedia articles, and further graph-based features of the link graph induced by all candidates of the context, including graph distance measurements, connected-component analysis, and centrality and density observations. The candidates are then sorted by a confidence score that reflects how well an entity describes a mention; if the confidence score is lower than the chosen threshold, a NIL referent is annotated. [19] proposed lexical and n-gram based features to look up resources in DBpedia. The entity type is assigned by a Conditional Random Fields (CRF) classifier trained using DBpedia-related features (local features), word embeddings (contextual features), temporal popularity knowledge of an entity extracted from Wikipedia page view data, string similarity measures between the title of the entity and the mention (string distance), and linguistic features, with an additional pruning stage to increase the precision of entity linking. The whole process is split into five stages: pre-processing, mention candidate generation, mention detection and disambiguation (candidate selection), NIL detection, and entity mention type prediction. In the pre-processing stage, tweet tokenisation and part-of-speech tags based on the ARK Twitter Part-of-Speech Tagger were used, together with tweet timestamps extracted from the tweet IDs. The researchers used an in-house mention-entity dictionary of acronyms. This dictionary computes the n-grams (n ...).

The research paper [20] proposed an entity linking technique for linking named entity mentions appearing in Web text with their corresponding entities in a knowledge base. Thanks to the vast knowledge shared among communities and the development of information extraction techniques, automated large-scale knowledge bases now exist. Given this rich information about the world's entities, their relationships, and their semantic classes, all of which can be populated into a knowledge base, relation extraction techniques are vital for obtaining the Web data that promotes the discovery of useful relationships between entities extracted from text and their extracted relations. One possible way is to map the extracted entities and associate them with a knowledge base before they are populated into it. The goal of entity linking is to map every textual entity mention m ∈ M to its corresponding entry e ∈ E in the knowledge base. In some cases, when the entity mentioned in the text does not have a corresponding entity record in the given knowledge base, a NIL referent is given to indicate the special label of being un-linkable. The paper notes that named entity recognition and entity linking should be performed jointly so that the two processes strengthen one another. One module discussed in the paper is candidate entity generation: for each extracted entity, the entity linking system filters out irrelevant entities in the knowledge base and retrieves a list of candidates that the extracted entity might be referring to.
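To make the candidate-generation step just described concrete, here is a small, self-contained sketch in the spirit of the n-gram and gazetteer lookups used by [18] and [19]; the gazetteer contents, tweet, and NIL numbering are invented purely for illustration and are not taken from any of the cited systems.

```python
# Toy sketch of gazetteer-based candidate generation: token n-grams are matched
# (lowercased, exact match) against a tiny label -> DBpedia-entity index, longer
# spans taking priority over shorter overlapping ones, and mentions without any
# match fall back to a NIL referent. All data below is invented for illustration.
from itertools import count

GAZETTEER = {
    "uq": ["dbp:University_of_Queensland"],
    "university of queensland": ["dbp:University_of_Queensland"],
    "brisbane": ["dbp:Brisbane", "dbp:Brisbane_Roar_FC"],
}
_nil_ids = count(1)


def candidate_spans(tokens, max_n=3):
    """Map token n-grams to candidate entity lists, longest spans first."""
    covered, results = set(), []
    for n in range(max_n, 0, -1):
        for i in range(len(tokens) - n + 1):
            span = set(range(i, i + n))
            if covered & span:
                continue  # a longer overlapping span has already matched
            surface = " ".join(tokens[i:i + n])
            candidates = GAZETTEER.get(surface.lower())
            if candidates:
                covered |= span
                results.append((surface, candidates))
    return results


def link_or_nil(mention):
    """Return the ranked candidates for a mention, or a fresh NIL referent."""
    return GAZETTEER.get(mention.lower()) or [f"NIL{next(_nil_ids)}"]


tokens = "Pizza night at UQ in Brisbane".split()
print(candidate_spans(tokens))  # [('UQ', [...]), ('Brisbane', [...])]
print(link_or_nil("ITEE"))      # ['NIL1'] - no entry in the toy gazetteer
```

A real system would follow this with the ranking and disambiguation features described above; the sketch covers only exact-match candidate lookup and the NIL fallback.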
For candidate entity generation, the paper suggests three families of techniques: name-based dictionary techniques (built from entity pages, redirect pages, disambiguation pages, bold phrases from the first paragraphs, and hyperlinks in Wikipedia articles), surface form expansion from the local document (comprising heuristics-based methods and supervised learning methods), and methods based on search engines. For candidate entity ranking, five categories of methods are described: supervised ranking methods, unsupervised ranking methods, independent ranking methods, collective ranking methods and collaborative ranking methods. Lastly, the paper discusses ways to evaluate entity linking systems using precision, recall, F1-measure and accuracy. Despite all the methods proposed for the three main modules of an entity linking system, the paper clarifies that it is still unclear which techniques and systems are best, since different entity linking systems react and perform differently on different datasets and domains.

[21] proposed a new, versatile algorithm based on multiple additive regression trees called S-MART (Structured Multiple Additive Regression Trees), which emphasises non-linear tree-based models and structured learning. The framework is a generalisation of Multiple Additive Regression Trees (MART) adapted for structured learning. The proposed algorithm was tested on entity linking, primarily tweet entity linking, and was evaluated in both IE-driven and IR-driven settings. Non-linear models are shown to perform better than linear models in the IE setting, whereas for the IR setting the results are similar, except for LambdaRank, a neural-network-based model. The adoption of a polynomial kernel further improves the entity linking performance of the non-linear SSVM. The paper shows that entity linking for tweets performs better with tree-based non-linear models than with the alternative linear and non-linear methods in both IE- and IR-driven evaluations, and based on the experiments conducted, the S-MART framework outperforms the current state-of-the-art entity linking systems.

2.4. Entity Linking and Knowledge Base
[22] proposed an approach to free-text relation extraction. The system was trained to extract entities from text in cooperation with an existing large-scale knowledge base, and it learns low-dimensional embeddings of words, entities and relationships from the knowledge base by means of score functions. It builds on the usual approach of employing weakly labelled text mention data, but in a modified version that also extracts triples from existing knowledge bases. Thus, by generalising from the knowledge base, it can learn the plausibility of new triples (h, r, t), where h is the left-hand-side entity (or head), t the right-hand-side entity (or tail), and r the relationship linking them, even when that specific triple does not exist. By using all knowledge base triples rather than training only on (mention, relationship) pairs, the precision of relation extraction was shown to be significantly improved. [1] presented a novel system for named entity linking over microblog posts that leverages the linked nature of DBpedia as its knowledge base and uses graph centrality scoring as its disambiguation method to overcome polysemy and synonymy problems.
The authors' motivation for this method is the assumption that, because tweets are topic specific, related entities tend to appear in the same tweet. Since the system tackles noisy tweets, acronym handling and hashtag handling were integrated into the entity linking process. The system was compared with TAGME, a state-of-the-art named entity linking system designed for short text, and the results show that it outperformed TAGME on precision, recall and F1, with 68.3%, 70.8% and 69.5% respectively. [23] presented an automated method to populate a Web-scale probabilistic knowledge base called Knowledge Vault (KV), which combines extractions from the Web such as text documents (TXT), HTML trees (DOM), HTML tables (TBL), and human-annotated pages (ANO). It uses RDF triples (subject, predicate, object), each associated with a confidence score representing the probability that KV believes the triple is correct. In addition, all four extractors are merged into one system called FUSED-EX by constructing a feature vector for each extracted triple, and a binary classifier is then applied to compute the probability that the triple is correct. The advantage of this fusion extractor is that it can learn the relative reliabilities of each extractor and build a model of those reliabilities. The benefits of combining multiple extractors include 7% more high-confidence triples and a high AUC score of 0.927 (the probability that the classifier ranks a randomly chosen positive instance above a negative one). To overcome the unreliability of facts extracted from the Web, prior knowledge is used; in this paper, Freebase is used to fit the existing models. Two approaches are proposed: the path ranking algorithm, with an AUC score of 0.884, and a neural network model, with an AUC score of 0.882. A fusion of both methods was conducted to increase performance, giving an increased AUC score of 0.911. With this quantitative evidence of the benefits of fusion, the authors proposed a further fusion of the prior methods and the extractors to gain an additional performance boost. The result is the generation of 271M high-confidence facts, 33% of which are new facts unavailable in Freebase.

[24] proposed TremenRank, a graph-based model to tackle the target entity disambiguation challenge, the task of identifying target entities of the same domain. The motivation for this system is the unreliability of current methods that rely on knowledge resources, the shortness of the context in which a target word occurs, and the large scale of the collected documents. To overcome these challenges, TremenRank was built on the notion of collectively identifying target entities in short texts. This reduces memory usage because the graph is constructed locally and scales linearly with the number of target entities; the graph is created locally via inverted-index technology, using two types of indexes: a document-to-word index and a word-to-document index. Next, the collection of documents (the short texts) is modelled as a multi-layer directed graph that holds various trust scores via propagation, and this trust score provides an indication of the likelihood of a true mention in a short text.
A series of experiments showed that TremenRank is superior to current state-of-the-art methods, with a 24.8% increase in accuracy and a 15.2% increase in F1. [25] introduced a probabilistic fusion system called SIGMAKB that integrates strong, high-precision knowledge bases and weaker, noisier knowledge bases into a single monolithic knowledge base. The system uses the Consensus Maximization Fusion algorithm to validate, aggregate, and ensemble knowledge extracted from web-scale knowledge bases such as YAGO and NELL and 69 Knowledge Base Population sources. The algorithm combines multiple supervised classifiers (high-quality, clean KBs), motivated by distant supervision, with unsupervised classifiers (noisy KBs). Using this algorithm, a probabilistic interpretation of complementary and conflicting data values can be presented to the user in a single response. Thus, using a consensus maximization component, the supervised and unsupervised data collected by the methods stated above produce a final combined probability for each triple. The standardization of string named entities and the alignment of different ontologies are done in the pre-processing stage.

Project plan (Semester 1)
Task | Start | End | Duration (days) | Milestone
Research | | | | 23/03/2017
- Twitter Call | 27/02/2017 | 02/03/2017 | 4 |
- Entity Recognition | 27/02/2017 | 02/03/2017 | 4 |
- Entity Extraction | 02/03/2017 | 02/03/2017 | 7 |
- Entity Linking | 09/03/2017 | 16/03/2017 | 7 |
- Knowledge Base Fusion | 16/03/2017 | 23/03/2017 | 7 |
Proposal | 27/02/2017 | 30/03/2017 | 30 | 30/03/2017
Crawling Twitter data using Public Stream API | 31/03/2017 | 15/04/2017 | 15 | 15/04/2017
Collect Twitter data for training purp
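Finally, to make the end product of the plan above concrete, here is a minimal sketch, assuming the networkx library and using invented linked-entity lists in place of real pipeline output, of the entity-entity graph described in section 1.3, where co-occurrence in the same tweet creates or strengthens an edge and DBpedia relations are layered on as extra edge attributes.

```python
# Minimal sketch of the entity-entity graph from section 1.3, assuming networkx;
# the linked-entity lists and the dbo:city relation below are illustrative only.
from itertools import combinations

import networkx as nx

# Each inner list stands in for the linked entities found in one tweet.
tweets_entities = [
    ["dbp:University_of_Queensland", "NIL1"],           # the ITEE example tweet
    ["dbp:University_of_Queensland", "dbp:Brisbane"],
    ["dbp:Brisbane", "NIL1"],
]

G = nx.Graph()
for entities in tweets_entities:
    for a, b in combinations(sorted(set(entities)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # co-occurrence strengthens the edge
        else:
            G.add_edge(a, b, weight=1, source="co-occurrence")

# Relations pulled from DBpedia can be layered onto the same edges, e.g.:
G["dbp:University_of_Queensland"]["dbp:Brisbane"]["relation"] = "dbo:city"

print(G.number_of_nodes(), "entities,", G.number_of_edges(), "edges")
```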

Friday, October 25, 2019

Tolstoy's Anna Karenina Essay -- Tolstoy's Anna Karenina

Tolstoy's Anna Karenina

The world of Tolstoy's Anna Karenina is a world ruled by chance. From the very opening chapters, where a watchman is accidentally run over by a train at Moscow's Petersburg station, to the final, climactic scenes of arbitrary destruction when Levin searches for Kitty in a forest beset by lightning, characters are brought together and forced into action against their will by coincidence and, sometimes, misfortune. That Anna and Vronsky ever meet and begin the fateful affair that becomes the centerpiece of the novel is itself the consequence of a long chain of unrelated events, culminating in Anna's sharing a berth with Vronsky's mother on her way to reconcile Dolly and Stiva in Moscow. And yet, as an epigraph to this seemingly chaotic world of chance event, a seemingly amoral world that would seem to neither punish sin nor reward good, Tolstoy chooses a quotation that comes originally from the book of Deuteronomy's song of Moses: "Vengeance is mine; I will repay." Originally (and somewhat narrowly) thought to refer to Anna's final ostracism from the upper echelons of society that punish her for her misdeeds, the epigraph is the key to Tolstoy's subtle and philosophically complex conception of morality, which denies the existence of a universal and unavoidable justice and derives responsibility from the individual's freedom to create and then bind himself to laws. Three of the novel's characters, Stephen Oblonsky, Constantine Levin, and Anna Karenina, all in some way connected to the Shcherbatsky family, serve to illustrate the various ways that Tolstoy's individual can be, or fail to be, "good," the various ways in which a character can be moral, immoral or amoral through the use of thought, or reason, to create necessity outside of the confused demands of a chaotic reality. Tolstoy's world is indeed a servant to chance, and the plot depends so heavily on coincidence that Anna Karenina, taking into account the many elements of Menippean satire and Socratic dialogue that are integrated into its structure, may well be considered in part a carnival novel. The steeplechase scene during which Vronsky breaks Frou-Frou's back is a perfect example of carnivalism -- the tragic yet somehow slapstick and cartoon-like injuries that befall the riders are a parody of the grand battlefield that the steeplechase is supposed to symbolize and the ... ...els." Anna is immovable in the face of the purely pleasurable and uninterpreted aspects of life -- "girlish delights" -- that are Oblonsky's daily bread. Anna is thus a tragic hero in the strict Aristotelian sense of being destroyed by the logical evolution of her personality. Yet it is also true that Tolstoy resists the tragic form in the overall structure of his novel by continuing into Part VIII and into Levin's life after Anna's death. While Anna fails to sustain a life centered in "romantic morality," the Goethian ideal of complete devotion, not to the loved one, but to the condition of being in reciprocal love itself, Levin finds, at the end of the novel, a way to live that transcends the demands of reality.
In the folk culture of the peasants that he encountered near the very beginning of the novel, he finds the peasant Theodore, who understands Levin's need to leave the mundane, to live not for his belly but for "Truth," a goodness that is beyond the chain of cause and effect that so binds the other characters in the novel -- Dolly, for example, who, unable to apply reason outside of pragmatic thought to her life, continues to live, pathetically, with her unfaithful husband.

Thursday, October 24, 2019

Frequent Shopper Program

Kudler Fine Foods (KFF) is a local upscale specialty food store committed to providing customers with the finest selection of specialty foods. In addition, KFF would like to reward customers for their loyalty by introducing a frequent shopper program. KFF is planning to develop a system that tracks customer purchases and awards loyalty points for redemption. The system will assist KFF in satisfying its most valued customers. Smith Systems Consulting has been contracted to develop the system. Smith Systems Consulting has been serving clients since 1994 with high-value web and business application services. In this proposal, Smith Systems Consulting will present two alternative methods for completing the frequent shopper application. The advantages and disadvantages of each method, and how the firm would conduct testing for each development method, will be discussed.

Regardless of which method is used, most software process models follow a similar set of phases and activities. The difference between models is the order and frequency of the phases. The specific parts of the software process are presented below:
1. Inception – Software product is created and defined.
2. Planning – Resources, schedule, and cost are determined.
3. Requirements Analysis – Specify what the application must do.
4. Design – Specify the parts and how they fit.
5. Implementation – Write the code.
6. Testing – Execute the application with input test data.
7. Maintenance – Repair defects and add capability. (TechTarget, 2014)

The first model that will be proposed is the "waterfall" process. The "waterfall" process is the oldest software process model and, despite its weaknesses, it is still in widespread use today. The waterfall process requires following the phases in sequential order, where the output of one phase is used as the input to the next. The next phase in the process is not started until the previous one has been completed, although a small overlap between phases is accepted. Two advantages and two disadvantages of using this model for the frequent shopper program will be discussed.

The first advantage is the practicality of the process. We have been using this process for many years and have a great deal of experience with it. All individuals involved have an understanding of the process and its execution. The second advantage is that the process is simple and easy to use. The criteria of each phase are set and completed sequentially. The order of execution is easy for everyone to comprehend, and there is no question about what needs to be completed before the next phase can begin.

The first disadvantage is that requirements need to be known up front. KFF currently has a broad range of requirements, and every detail is not known. As the project progresses, more details may become known, which could cause the project to be stopped and re-imagined. The second disadvantage is that there is no feedback on the system from stakeholders until after the testing phase. KFF has no way of knowing whether the program meets its requirements because the "waterfall" process does not facilitate intermediate versions.

The second method that will be proposed is the agile methodology. The agile methodology proposes alternatives to traditional project management. Agile development focuses on keeping code simple, testing often, and delivering functional bits of the application as soon as they are ready (TechTarget, 2014).
One goal of agile development is to build upon small, client-approved parts as the project progresses, as opposed to delivering one large application at the end of the project. One advantage of using the agile methodology for the frequent shopper application is the ability to respond to changing requirements. KFF may decide to change the requirements of the project, which can easily be handled using the flexibility of the agile methodology. A second advantage is the face-to-face communication and continuous input from customer representatives, making sure that there is no guesswork (Buzzle, 2013). The result is exactly what the customer has required.

The first disadvantage of the agile methodology is the possibility that the project can be taken off track. KFF is not one hundred percent clear on the final outcome it wants; therefore, the project has the potential to get off track because requirements are constantly changing. Another disadvantage is that it is difficult to assess the effort needed to complete this project at the beginning of the software development life cycle. Since KFF is not specific about the requirements for the project, we cannot plan how much time or what amount of resources we will need to complete the project.

Regardless of the method used for the frequent shopper application, testing is a necessary component of the process. Testing is conducted differently depending on which software model is used. Since the waterfall method follows a sequential approach, the testing is sequential as well, while the flexibility of the agile method also allows flexibility in the testing process.

Using the waterfall method, testing would begin during the implementation stage. The work would be divided into modules, and coding would begin after receiving the system design documents. The frequent shopper program would be developed as small programs called units. As an example, there would be a program that handles the input from the customer and another program that tracks the customer's reward points. Each unit is developed and then tested for functionality; unit testing verifies that the units meet the specifications. The units are then integrated into a complete system during the integration phase and tested to see whether all units coordinate with each other and the system functions as a whole per the specification (Onestoptesting, 2014). After testing of the frequent shopper program is successful, the software is delivered to the customer. If problems are found after deployment, they are solved immediately; this is referred to as maintenance, and sometimes that process is virtually never-ending.

Agile testing treats testing as an integral part of software development rather than a separate phase ("Agile Testing", n.d.). Testing from the beginning of the project and continually throughout the project lifecycle is the foundation on which agile testing is built. Agile testing is software testing based on the principles of agile software development.

The combined team, including the testing team, will take responsibility for analyzing the business requirements of the frequent shopper program. Together the team will define a sprint goal. The testing team will then begin work on the test plan, which is validated by the entire team and KFF.
As the development team starts the implementation, the test team will begin working on the test case design. When the code is ready to test, the test team will do a quick test in the development environment in order to identify early-stage defects. Developers will fix the defects on a priority basis. This iteration will continue until the end of the code implementation. In addition, after approval from KFF, automated test cases will be run on a daily basis; because of the frequency of testing in the agile method, automated tests are needed.

Smith Systems Consulting needs to choose the methodology that works for the firm and the client. Since each project is unique, there is no one-size-fits-all methodology. Two alternative methods for completing this project were presented, and Smith Systems Consulting can make a decision on which to choose.
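To ground the unit-testing step described above, here is a small, purely hypothetical example of how one unit of the frequent shopper program might be tested; the loyalty_points rule used here (one point per whole dollar, doubled once a purchase reaches $100) is invented for illustration and is not a stated KFF requirement.

```python
# Hypothetical unit test for a single "unit" of the frequent shopper program.
# The points rule below is invented for the example, not a KFF specification.
import unittest


def loyalty_points(purchase_total):
    """Return the loyalty points earned for a single purchase."""
    if purchase_total < 0:
        raise ValueError("purchase total cannot be negative")
    points = int(purchase_total)               # 1 point per whole dollar
    return points * 2 if purchase_total >= 100 else points


class LoyaltyPointsTest(unittest.TestCase):
    def test_regular_purchase(self):
        self.assertEqual(loyalty_points(42.50), 42)

    def test_bonus_threshold(self):
        self.assertEqual(loyalty_points(100), 200)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            loyalty_points(-5)


if __name__ == "__main__":
    unittest.main()   # would run as part of the automated daily test pass
```

Under either methodology, a suite like this runs after each change: sequentially during the implementation phase in the waterfall model, or continuously throughout each sprint in the agile approach.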

Wednesday, October 23, 2019

Psychology Post Labor Day

This syndrome results in anxiety, lack of motivation, difficulty concentrating and a feeling of emptiness during the first few weeks of returning to work. There is a lot of factual information that is used to back up this idea. Most of the research is professionally studied and scientifically proven; however, some evidence is stronger than others. This article provides a lot of evidence. This article is mostly scientific because almost all of the information comes from professional doctors, journals, or studies. For example, a study of 96 Dutch workers "found that health and wellbeing returned to pre-vacation levels during the first week back at work." This was conducted by professionals and published in the journal Work and Stress. The article also states information and then tells the readers where that information came from, whether it was a journal or a person. For example, after the article explains how to be alert for symptoms of Post Vacation Syndrome, it refers to its source: "... says Katherine Mueller, assistant director of the Center for Integrative Psychotherapy in Allentown, Pa." The article gives a thorough description of who the person is: her occupation/position and her location. Some evidence is slightly stronger scientifically than others because of its accuracy. Not all the people who are mentioned in this article are 100% correct, but they are not wrong either. Some evidence gives a more accurate description than others. For example, "An estimated 6% of the U.S. population suffers from SAD" (Seasonal Affective Disorder). This piece of evidence gives a numerical value, which makes it more accurate than the statements above.

Most evidence in this article is factual; however, there are some opinions. Facts are pieces of information that are scientifically proven, while opinions are the way someone thinks and their viewpoint on a topic. For example, Emily Clicking has an opinion on children's and adults' mindsets about going back to school: "Generally, kids can't wait to go back to school. For parents, that means months of purchasing, planning, nagging, chauffeuring, chaperoning and negotiating." This is an opinion because it is not true that all children and parents view going back to school that way. That statement reflects more of Clicking's point of view than scientifically proven facts. If Clicking had mentioned a percentage of how many kids are excited to go back to school and how many parents are not excited for the school year to begin, it would be a more reliable source. Findings in this article are trustworthy because it uses a lot of sources, such as different people and different studies in different journals. This creates an unbiased argument.