Pages: 13 pages/≈3575 words
Sources: 13
Style: APA
Subject: Literature & Language
Type: Essay
Language: English (U.S.)
Document: MS Word
Topic:

Business Language Test: Reading and Writing

Essay Instructions:

The topic is designing a business English test to assess the business-related language abilities of candidates who want to become members of the company. All candidates already hold IELTS 6 or CEFR B2 before entering the program. The test has only two sessions each year: one in spring and one in winter. Candidates who pass the prevocational test will be formally admitted and become members of the export company.

Based on the information provided above, please write an essay following the structure carefully listed as follows:
Present and give a rationale for a new language assessment you have designed, including some sample test materials for that assessment. Your presentation and rationale should be 3,000-3,500 words. The sample test materials should represent the equivalent of around 500-1,000 words, so that the whole assignment is the equivalent of around 4,000 words.

Notes
• This document is a “draft template”. It’s presented in Week 5 of the course so that learners know what’s expected for the assignment. The “final template” will be presented toward the end of the course and all learners notified. The final template may have minor additions/amendments, compared to this draft template.
• Before writing your assignment, make sure you have consulted the information on Moodle in the sections ‘General course information’ and ‘Resource list’.
• The phrase “equivalent of” above means that your whole assignment should represent a similar amount of work to that which goes into a normal 4,000-word assignment. For example, if you write 3,500 words for the presentation/rationale of your assessment, the sample test materials you present should then represent around the same amount of work as used to write 500 words in a normal assignment. Note that sample test materials representing the equivalent of 500 words of work may themselves contain far FEWER than 500 words (e.g. a sample SR item may only contain 20 words but have taken a very long time to create), or far MORE than 500 words (e.g. a sample reading text may have been only minimally adapted from an online news story).
• While you have worked on test design in groups during the semester, as usual it is required that your writing should NOT include any text which is the same as text in other students’ assignments, including the assignments of students in your group. This requirement applies BOTH to the written presentation/rationale AND to the sample test materials.
• If you wish to present sample test materials which are designed to be in a non-English language (e.g. pre-assessment instructions written in Chinese), please also present a translation into English.
• It is expected that the sample test materials are generally integrated into the writing of the assignment. (So it is expected that you do NOT insert all the sample test materials into an Appendix.)
• Before submission, I suggest you check your writing in Microsoft Word (for obvious spelling, grammar, and punctuation errors) and Academic Marking Mate (for obvious problems with academic style).
• Your assignment will be scored using the same “Marking Criteria Matrix – for Postgraduate Taught Courses” document used for scoring most MA courses in the School of Education and English.

Word count NOT including references, NOT including words contained within sample test materials

[The instructions in red on this template are provided to help guide you through writing the assignment. DELETE ALL THE INSTRUCTIONS IN RED BEFORE SUBMITTING YOUR ASSIGNMENT.]
Purpose of the assessment [Begin with this section. No Introduction is needed in this assignment.]
[This section should include the construct, features of the expected test-taker population, decisions which will be made using the scores, and any other relevant features.]
[As discussed in the course, your assessment must test AT LEAST TWO of the traditional skills (listening, reading, writing, speaking). Your assessment must also test AT LEAST ONE RECEPTIVE SKILL (listening and/or reading) and AT LEAST ONE PRODUCTIVE SKILL (speaking and/or writing). Integrated items are acceptable.]
[This assignment is primarily practical and applied; but validity is one theoretical concept which everyone should define, since it is so important but often misinterpreted. So when describing the purpose of your assessment, GIVE A DEFINITION OF VALIDITY.]

Assessment process [Present the design of the assessment, and also give a rationale for the design, based on the three major evaluative criteria for assessments. I imagine that combining presentation and rationale will be most efficient. But if you find it more intuitive to first present the assessment design in each section, and then give a rationale for it, feel free to do this. The sub-headings used below are mostly taken from Mislevy’s ECD framework, which has been used throughout the course, and is explained further at Almond, Steinberg and Mislevy (2002).]
• Pre-assessment information communicated to test-takers and other relevant stakeholders [e.g. what information is communicated about the assessment before the assessment takes place, who is it communicated to, when is it communicated, what formal teaching if any takes place that is relevant to the assessment, …]
• Assessment presentation materials [e.g. reading/listening texts; items, including both selected-response and constructed-response; instructions given to test-takers; how have texts and items been developed or selected; ….]
[You must include AT LEAST TWO selected-response items in your sample test materials, which have been created by yourself. You must include AT LEAST ONE excerpt from a reading or listening text.]
• Assessment presentation process [e.g. use of technology; features of physical environment; number of times a listening text is played; sequencing of texts and items; times allowed for texts and items; location of test; ….]
• Expected test-taker work products [e.g. what writing, speaking, or other responses will the test-taker produce; how long will they be; …]
• Evidence rules [e.g. what documents will be used to score the test-taker work products; how have those documents been developed or selected; …]
• Response processing [e.g. who will score the test-takers’ work product; what training or guidance will they be given; what efforts will be made to assure their reliability; how much time will they have; …]
• Summary scoring process [e.g. how will the response processing from the different items be combined to determine an overall score/overall scores for the assessment; …]
• Summary feedback [e.g. what score or scores will be reported; what other feedback is reported; who will it be reported to; …]

Conclusion [Probably just 1 paragraph]
References [It is expected that most of your references come from:
• the sources on the ‘Resource list’ on Moodle
• sources which are cited by sources on the resource list on Moodle
• sources which can be found in www.scopus.com or the www.webofscience.com Web of Science Core Collection

It is also expected that you cite at least one set of external standards, such as CEFR or China’s Standards of English Language Ability (CSE).

You’re a Masters student, so of course please use your judgment when identifying other sources which you think are relevant and of a high quality. You should NOT cite random journal articles, books, and student dissertations/theses which you happen to find through searches on Google Scholar, Google, Baidu, etc.]
• [Format your references in a consistent style. If you’re unsure how to cite a particular kind of source, check https://apastyle.apa.org/style-grammar-guidelines/references/examples.]
• The class audio recordings and the PowerPoint slides will be uploaded to the following website, and you need to download them.
The quiz questions and answers will also be uploaded to the website.

Please listen to audio Lecture Five + Seminar Five first, particularly from 44 minutes 32 seconds onward, to get an overall idea of how this assignment is to be written.

Pay attention to Knoch (2021), ‘Assessing writing’, which is pages 236-248 in Fulcher & Harding (2021), The Routledge Handbook of Language Testing.

The LTA draft assignment template is the document you must use throughout the writing of your final paper to guide your writing. The marking criteria document sets out what is needed for a distinction-level final essay, so follow the criteria for distinction. The third document contains screenshots of the most important information for this assignment, so you must look at them carefully. Lastly, I have tried my best to collect as many of the key readings as I can; almost all of them are included in the last zip file for your reference. Depending on the construct of the test, please try to cite, and include in the reference list, as many of the materials in the last zip file as you can.

The files sent in this email contain the quizzes, readings for the quizzes, PowerPoints for classes, and the course recordings for the first six weeks except week three. The PowerPoints are located in the file called "LTA FIRST SIX WEEKS READING TO ACCOMPANY QUIZZES".

Essay Sample Content Preview:

Business Language Test
Student Full Name
Institutional Affiliation
Course Full Title
Instructor Full Name
Due Date
Business Language Test
Purpose of the assessment
This business language test will assess two of the traditional skills: reading and writing. It will focus on reading as a receptive skill and writing as a productive skill. The knowledge, skills, capacities, and other features that form the focus of the reading assessment section include learners’ knowledge of reading, vocabulary, and grammar. The writing assessment section, in turn, will concentrate on evaluating candidates’ content, communicative achievement, organization, and language. It is crucial that second language learners possess the ability to write, given the importance of the skill across social, educational, and workplace settings (Marcoulides & Ing, 2013). However, given the variability of second language learners, English language assessment must be able to assess the degree to which all students’ English is fit for real-world purposes (Jenkins & Leung, 2013). The test-taker population comprises candidates who wish to become members of an export company: prospective applicants must pass a prevocational business English test that evaluates their business-related language abilities in order to be formally admitted. All candidates already hold IELTS 6 or CEFR B2 before entering the program. The candidates sitting for the test have an effective command of the English language and can employ and comprehend fairly sophisticated language in familiar situations, even though they are prone to occasional inappropriate or inaccurate usage and to misunderstandings.
They can communicate naturally and without difficulty, in a clear and detailed manner, even if they are not yet experienced users. The reading and writing business language test is therefore a proficiency test and a professional qualification tool for assessing employment potential. One theoretical concept that is especially important in the business language test is validity. Validity is a key conceptual factor in classroom-based evaluation from the perspective of language teacher knowledge and practice (Leung, 2013). This assessment defines validity as the degree to which the test measures the constructs it is designed to measure (Fulcher & Harding, 2021, p. 22). In this case, the validity of the business language test is the extent to which it measures learners’ knowledge of reading, vocabulary, and grammar, as well as their content, communicative achievement, organization, and language.
Assessment process
The three major evaluative criteria for assessments are reliability, construct validity, and authenticity. Reliability refers to the consistency of measurement across the various features of a testing situation, such as different raters and different prompts. The test must give consistent results across different prompts or raters in order to ensure that the inferences derived from the test results are suitable and fair. However, reliability is more than ensuring consistency; it also includes ensuring that the test captures the ability the examiner wants to test. The reliability of a test depends on various factors, including variables tied to the writing task itself as well as the scoring process. The second evaluative criterion is construct validity, which denotes the meaningfulness and appropriateness of the inferences an examiner makes about test-takers’ ability on the basis of test scores. In order to draw accurate interpretations about the ability of students taking the test, it is important to ascertain the extent to which the test actually assesses that particular ability. Construct validity depends on how the ability of interest is defined and on the specific testing context (Chapelle, 2009, p. 102). The third evaluative criterion is authenticity, which refers to the degree of similarity between the features of a particular language test task and the characteristics of a target language use task. The test task must correspond to the type of skill that test-takers will require in the real world after the test.
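To make the reliability criterion concrete, the short sketch below illustrates how agreement between two raters’ band scores might be checked. It is an illustrative sketch only, not part of the planned test: the band scores, the five-point scale, and the choice of a quadratic-weighted kappa from scikit-learn are all assumptions made for demonstration.

    # Illustrative sketch: checking scoring consistency between two raters
    # on the same ten writing scripts (hypothetical band scores, 1-5 scale).
    from sklearn.metrics import cohen_kappa_score

    rater_a = [3, 4, 2, 5, 3, 4, 4, 2, 5, 3]
    rater_b = [3, 4, 3, 5, 3, 4, 3, 2, 5, 4]

    # Quadratic weighting credits near-misses (e.g. 3 vs 4) more than large
    # disagreements, which suits ordinal band scales.
    kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
    print(f"Weighted kappa between raters: {kappa:.2f}")

A low value on a check of this kind would signal a need for further rater training or rubric revision before operational scoring.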
The reading assessment section of the business language test will entail a cloze test and a comprehension test. Cloze tests, in which learners are required to fill in the missing words of a lengthy text, are widely regarded by reading specialists as an effective reading assessment tool for several reasons. The proficiency movement in America has long advocated for language tests that evaluate productive skills critical to real-world communication. Cloze tests gained popularity in the 1970s as an alternative format for integrative tests, particularly because they necessitate context-driven and value-bound strategies in developing items and interpreting required responses (Fulcher & Harding, 2021, p. 239). They were also lauded for the authentic manner in which they test vocabulary and grammar. Moreover, the cloze test method assesses these abilities in the manner in which they are employed in the actual reading process.
One of the main reasons for the prominence of integrative tests in proficiency testing is their capacity to evaluate the student’s ability to employ a variety of reading strategies, such as context clues, and to comprehend the syntactic position of words in sentences so as to overcome challenges in understanding the author’s meaning (Koda, 2007). Another rationale for using a cloze test in the business language test is that it is a viable supplement to the traditional comprehension test: it not only evaluates a learner’s comprehension of the message contained within the given text, but also assesses the student’s capacity to integrate meanings across sentences in order to arrive at a more complete understanding of the general message of the text. Test items depend on the passage, and the learner can only respond fittingly to an item by first attempting to understand the passage. However, critics have challenged the use of cloze tests in language assessment, citing their lack of sensitivity to intersentential constraints and claiming that they often focus on lower-order skills. Critics also argue that cloze tests cannot be a valid measure of reading proficiency or text comprehension because they fail to evaluate discourse-level representations.
However, many of these opposing arguments are countered by the fact that cloze items test the learner’s ability to process information across sentences. A majority of cloze items are cohesion items, and filling in the gaps involves understanding the broader meaning beyond sentence boundaries. Besides, there have been growing calls for a move away from “choice formats” towards a “search for authentic items”. Cloze tests incorporate the real world, since many are sourced from authentic texts, and therefore test practical language skills: they are integrative, authentic, communicative, and practical for testing real-world language proficiency. Most of the items in cloze tests require knowledge of real language use, including vocabulary, language structure, phonemic reading, auditory discrimination, and general-use lexicon (Fulcher & Harding, 2021, p. 251). Because the test-takers are seeking employment in an export company, using an assessment tool that evaluates their ability to transfer reading comprehension to real language use is key.
Cloze tests mirror, to a great extent, the levels of language proficiency that will be required by the employing firm, the situation examinees hope to find themselves in after being selected on the basis of test results. Choice formats and discrete-point testing have been widely criticized for concentrating on learners’ knowledge of the linguistic system, as opposed to cloze tests, which comprise integrative test items. The prevailing contention that language ought to be evaluated as a whole serves to reinforce the usefulness of cloze tests as a reading assessment tool (Fulcher & Harding, 2021, p. 250). In addition to their integrative character, cloze tests provide other advantages over other reading assessments. Cloze tasks can be applied to a broad variety of texts and are relatively simple to construct. They also overcome the challenge of asking sensible questions about lengthy texts by distributing critical items across the entire passage. Moreover, cloze tests are more resilient to experimenter bias than other assessment methods: for instance, traditional comprehension exercises are susceptible to text manipulations, a drawback that is relatively uncommon in cloze tests.
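As a rough illustration of how simply such items can be constructed, the sketch below applies one common construction method, fixed-ratio (every nth word) deletion, and distributes the gaps across the whole passage. The sample sentence, the deletion rate of seven, and the function name are assumptions made for demonstration only and are not the actual test materials.

    # Illustrative sketch: building a fixed-ratio cloze item by blanking
    # every nth word of a passage after an intact lead-in.
    def make_cloze(passage: str, every_nth: int = 7, lead_in: int = 2):
        words = passage.split()
        answer_key = []
        # Leave the first `lead_in` words untouched so test-takers get context.
        for i in range(lead_in, len(words)):
            if (i - lead_in + 1) % every_nth == 0:
                answer_key.append(words[i])
                words[i] = "_____"
        return " ".join(words), answer_key

    text = ("The export manager confirmed that the shipment would leave the "
            "port on Friday, provided the customs documents were signed in time.")
    item, answer_key = make_cloze(text)
    print(item)
    print("Answer key:", answer_key)

In practice the draft gaps would still be reviewed by hand, since operational cloze items are usually adjusted so that each gap remains recoverable from context.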
The traditional comprehension test, despite its shortcomings, is another useful way of assessing learners’ reading and comprehension proficiencies. Comprehension tests of lengthy discourses have long been used to assess students’ inferencing and strategic processing skills in building an intelligible mental model of a text. However, a majority of traditional comprehension tests use a multiple-choice format in which both the text and the questions are simultaneously accessible to learners. While there are several advantages to this evaluation strategy, numerous researchers have pointed out the inability of multiple-choice formats to assess the products and processes outlined by discourse theory, which are essential for determining deep comprehension (Chapelle, 2009). The challenge is to devise items and tests that cannot easily be answered through test-wiseness strategies aimed at sidestepping informed means of producing responses (Cohen, 2013). It is not uncommon for students to skip the passage and dive into the questions first, searching for the correct answers in the text. Such language tests are particularly susceptible to guessing, thereby compromising test security (Ellis & Ross, 2013). Besides, multiple-choice formats do not provide sufficient opportunities to evaluate certain aspects of language, such as vocabulary.
For instance, reading comprehension tests with choice formats tend to focus on inferencing skills while neglecting vocabulary: it is easy for learners to infer the correct answer without verifying word meanings. Rather than using the multiple-choice format, the comprehension test will focus on textually explicit and implicit open-ended questions: both direct and indirect questions will be used. Textually explicit questions are those whose answers can be found in a single sentence, while textually implicit questions require the learner to process information across sentence boundaries (Alderson, 2000, p. 87). The comprehension test will focus on the latter, since the aim of the exercise is to evaluate learners’ ability to make inferences about the text by receiving and interpreting information encoded across sentence borders. The purpose of the test is to determine how well students can work through the rhetorical structure of the text, vocabulary items, and grammatical points to build meaning.
Indirect questions are especially useful in evaluating comprehension processes, particularly the student’s syntactic knowledge, genre knowledge, lexical knowledge, morphological knowledge, and general world knowledge. The comprehension test will aim to evaluate learners’ knowledge of: the functions of articles, verbs, modals, and nouns; word structures such as prefixes, morphemes, and suffixes; types of texts and genres; and the reading topic (both background knowledge and sociocultural knowledge) (Alderson, 2000, pp. 89-93). Besides, direct and indirect textually implicit questions also test higher-order skills, and the learner can only arrive at the correct answer through correct interpretation and evaluation of the text. On the whole, the reading assessment approach will be integrative and will focus not just on students’ ability to comprehend the important features or meanings of the text but also on the vocabulary and grammar facets that tend to facilitate or impede this comprehension. Since the ultimate goal of reading is understanding the meaning conveyed by the text, testing the cognitive processes tied to underlying language competencies is central to ascertaining that students can decode words, connect sentence meanings, and apply general world knowledge to achieve appropriate text comprehension in the real world.
The writing assessment section of the business language test will entail a persuasive discourse in the form of an email to a company executive and an expository discourse in the form of an informative article. This second part of the business language test will evaluate the ability of learners to communicate ideas clearly, organize content in a smooth and linear manner, and demonstrate language proficiency (Weigle, 2002, p. 79). This part of the business language test will focus on assessing students’ mastery of grammar points as well as the technical features of writing. For instance, the persuasive discourse will examine test-takers’ linguistic, textual, vocabulary, and language knowledge as well as how sentences are organized into texts. More importantly, the email writing task will evaluate test-takers’ use of language to achieve target communicative functions, such as persuasion. Similarly, the expository discourse will not only test linguistic, textual, vocabulary, and language knowledge, but also the ability to use language to describe a subject clearly and factually.
It is critical to assess students’ capacity to vary language use in different settings: whether students can write texts that fulfill a particular function, such as to persuade or inform, and whether they can apply language that meets the expectations of formality of a professional and academic audience (Purpura, 2013, pp. 38-39). The email writing exercise will evaluate students’ understanding of basic email processes, best practices in professional email communication, and proper email etiquette in a business setting. The informative article writing, on the other hand, will assess students’ capacity to explore an issue of importance and apply article writing conventions such as writing factually, following the organizational structure of informative articles, and conveying ideas in a clear and interesting manner. Both the email and informative article writing tasks are suited to real-world applications. For instance, evaluating a test-taker’s ability to communicate persuasively and effectively by professional email is essential given the influence of email writing on one’s career path. At the same time, assessing a test-taker’s ability to write an expository article is integral in the business world given the ubiquity of reports.
• Pre-assessment information communicated to test-takers and other relevant stakeholders
The objective of the pre-assessment information will be to communicate test expectations and focus test-takers’ attention on assessment targets. Formal teaching of models of excellent performance for the cloze test, the comprehension test, and the email and informative article writing tasks will be conducted a few weeks prior to the business language test. In preparation for the cloze test, students will be taught strategies for success in cloze tasks, including how to recognize parallelism across phrases and how to process information within and across sentence boundaries using cues in previous and subsequent sentences. As regards answering comprehension questions, test-takers will learn how to identify the main ideas communicated in a passage, the various kinds of reading questions, how to apply inferencing to locate ideas not stated directly, and ways of activating prior experiences and knowledge to answer questions whose answers are not located in the text.
Pre-assessment inf...