Exam Questions for SELECON

Q2471861 Mathematics
In a right circular cone, the base radius, the height, and the slant height, in that order, form an arithmetic progression. Hence the unrolled lateral surface of this cone is a circular sector whose central angle, in radians, is equal to:
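One possible worked sketch (not the official answer key), writing r for the radius and d for the common difference, and using the Pythagorean relation between radius, height and slant height:

$$h = r + d,\quad g = r + 2d,\quad g^2 = r^2 + h^2 \;\Rightarrow\; 3d^2 + 2rd - r^2 = 0 \;\Rightarrow\; d = \frac{r}{3},$$
$$h = \frac{4r}{3},\quad g = \frac{5r}{3},\qquad \theta = \frac{\text{arc}}{\text{radius}} = \frac{2\pi r}{5r/3} = \frac{6\pi}{5}\ \text{rad}.$$

The unrolled lateral surface is a sector of radius g whose arc length equals the base circumference 2πr, which is where the angle formula above comes from.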
Q2471860 Mathematics
Aurélio has to read three different reports over five days, subject to the following conditions:

• At most one report may be read per day.
• The reports may not be read on three consecutive days.

The maximum number of different ways in which Aurélio can read these reports is:
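One possible counting sketch (not the official answer key): choose which 3 of the 5 days receive a report, excluding the three choices of consecutive days ({1,2,3}, {2,3,4}, {3,4,5}), then distribute the 3 distinct reports among the chosen days.

$$\binom{5}{3} - 3 = 10 - 3 = 7 \quad\text{valid day choices},\qquad 7 \cdot 3! = 42.$$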
Q2471859 Mathematics
One day, Maurício stated: "Today, the probability that I go to the theater is 70%, and that I go to the beach is 20%." If he performs each of these actions independently of the other, the probability that, on that day, Maurício goes to the theater or goes to the beach is:
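One possible sketch (not the official answer key), writing T for "goes to the theater" and P for "goes to the beach" and using the addition rule for independent events:

$$P(T \cup P) = P(T) + P(P) - P(T)\,P(P) = 0.7 + 0.2 - 0.7 \cdot 0.2 = 0.76 = 76\%.$$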
Q2471858 Mathematics
The solution set of the inequality (m + 1)x² + m < 2(m – 1)x is the empty set if:
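One possible sketch (not the official answer key): rewrite the inequality as (m + 1)x² − 2(m − 1)x + m < 0. Its solution set is empty exactly when the quadratic is nonnegative for every real x, which requires an upward-opening parabola (m + 1 > 0) with non-positive discriminant.

$$\Delta = 4(m-1)^2 - 4m(m+1) = 4(1 - 3m) \le 0 \;\Rightarrow\; m \ge \tfrac{1}{3},$$

and m ≥ 1/3 already guarantees m + 1 > 0. (For m = −1 the inequality becomes linear and does have solutions, so that case is excluded.)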
Q2471857 Mathematics
A natural number X is equal to the difference between a perfect square and 1,960. The sum of the digits of the smallest value of X is:
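One possible sketch (not the official answer key): since 1,960 is not itself a perfect square, the smallest natural X = n² − 1960 comes from the smallest perfect square exceeding 1,960.

$$44^2 = 1936 < 1960 < 2025 = 45^2 \;\Rightarrow\; X = 2025 - 1960 = 65,\qquad 6 + 5 = 11.$$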
Q2471856 Mathematics
If √13 = a, the fraction [image associated with the question] is equal to:
Q2471855 Mathematics

Consider the real function f(x), defined as the determinant of the matrix A given below:

[Image associated with the question]

If the period of the function f is 6π, the positive value of k and the range (image set) of f are, respectively, equal to:

Q2471854 Mathematics
Irineu invested a certain amount at simple interest for 2 years, at a rate of 10% per year. The accumulated amount of that investment was then invested, also at simple interest, for 5 quarters at a rate of 0.8% per month. If the accumulated amount of this second investment is 4,704 reais, the capital initially invested, in reais, was:
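One possible sketch (not the official answer key), writing C for the initial capital and M₁, M₂ for the two accumulated amounts, and noting that 5 quarters correspond to 15 months:

$$M_1 = C(1 + 0.10 \cdot 2) = 1.2\,C,\qquad M_2 = M_1(1 + 0.008 \cdot 15) = 1.2\,C \cdot 1.12 = 1.344\,C = 4704 \;\Rightarrow\; C = 3500.$$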
Q2471843 English

What is Validity?
by Evelina Galaczi
July 17th, 2020


The fundamental concept to keep in mind when creating any assessment is validity. Validity refers to whether a test measures what it aims to measure. For example, a valid driving test should include a practical driving component and not just a theoretical test of the rules of driving. A valid language test for university entry, for example, should include tasks that are representative of at least some aspects of what actually happens in university settings, such as listening to lectures, giving presentations, engaging in tutorials, writing essays, and reading texts.

Validity has different elements, which we are now going to look at in turn.

Test Purpose – Why am I testing?

We can never really say that a test is valid or not valid. Instead, we can say that a test is valid for a particular purpose. There are several reasons why you might want to test your students. You could be trying to check their learning at the end of a unit, or trying to understand what they know and don't know. Or, you might want to use a test to place learners into groups based on their ability, or to provide test takers with a certificate of language proficiency. Each of these different reasons for testing represents a different test purpose.

The purpose of the test determines the type of test you're going to produce, which in turn affects the kinds of tasks you're going to choose, the number of test items, the length of the test, and so on. For example, a test certifying that doctors can practise in an English-speaking country would be different from a placement test which aims to place those doctors into language courses.

Test Takers – Who am I testing?

It’s also vital to keep in mind who is taking your test. Is it primary school children or teenagers or adults? Or is it airline pilots or doctors or engineers? This is an important question because the test has to be appropriate for the test takers it is aimed for. If your test takers are primary school children, for instance, you might want to give them more interactive tasks or games to test their language ability. If you are testing listening skills, for example, you might want to use role plays for doctors, but lectures or monologues with university students.

Test Construct – What am I testing?

Another key point is to consider what you want to test. Before designing a test, you need to identify the ability or skill that the test is designed to measure – in technical terms, the ‘test construct’. Some examples of constructs are: intelligence, personality, anxiety, English language ability, pronunciation. To take language assessment as an example, the test construct could be communicative language ability, or speaking ability, or perhaps even a construct as specific as pronunciation. The challenge is to define the construct and find ways to elicit it and measure it; for example, if we are testing the construct of fluency, we might consider features such as rate of speech, number of pauses/ hesitations and the extent to which any pauses/hesitations cause strain for a listener.


Test Tasks – How am I testing?

Once you’ve defined what you want to test, you need to decide how you’re going to test it. The focus here is on selecting the right test tasks for the ability (i.e. construct) you're interested in testing. All task types have advantages and limitations and so it’s important to use a range of tasks in order to minimize their individual limitations and optimize the measurement of the ability you’re interested in. The tasks in a test are like a menu of options that are available to choose from, and you must be sure to choose the right task or the right range of tasks for the ability you're trying to measure. 

Test Reliability - How am I scoring?

Next it’s important to consider how to score your test. A test needs to be reliable and to produce accurate scores. So, you’ll need to make sure that the scores from a test reflect a learner's actual ability. In deciding how to score a test, you’ll need to consider whether the answers are going to be scored as correct or incorrect (this might be the case for multiple–choice tasks, for example) or whether you might use a range of marks and give partial credit, as for example, in reading or listening comprehension questions. In speaking and writing, you’ll also have to decide what criteria to use (for example, grammar, vocabulary, pronunciation, essay, organisation in writing, and so on). You’ll also need to make sure that the teachers involved in speaking or writing assessment have received some training, so that they are marking to (more or less) the same standard.

Test Impact - How will my test help learners?

The final – and in many ways most important – question to ask yourself is how the test is benefitting learners. Good tests engage learners in situations similar to ones that they might face outside the classroom (i.e. authentic tasks), or which provide useful feedback or help their language development by focusing on all four skills (reading, listening, writing, speaking). For example, if a test has a speaking component, this will encourage speaking practice in the classroom. And if that speaking test includes both language production (e.g. describe a picture) and interaction (e.g. discuss a topic with another student), then preparing for the test encourages the use of a wide range of speaking activities in the classroom and enhances learning.

Adapted from: https://www.cambridgeenglish.org/blog/what-is-validity. Accessed on: 15 Dec. 2023.

Among the words below, all taken from the text, the one considered a false cognate is:
Q2471842 English

To turn the sentence “A test needs to be reliable and to produce accurate scores.” into a question by means of a question tag, one should use:
Q2471841 English

The clause “So, you’ll need to make sure that the scores from a test reflect a learner's actual ability”, rewritten correctly in the passive voice without changing its meaning or verb tense, is:
Q2471840 English

In the excerpt “To take language assessment as an example, the test construct could be communicative language ability, or speaking ability, or perhaps even a construct as specific as pronunciation”, the highlighted modal verb expresses the idea of:
Q2471839 English

In the excerpt “Before designing a test, you need to identify the ability or skill that the test is designed to measure – in technical terms, the ‘test construct’.”, the highlighted term is classified as:
Q2471838 English

The conditional clause “If you are testing listening skills, for example, you might want to use role plays for doctors, but lectures or monologues with university students.” is classified as:
Q2471837 English

In the excerpt “...a placement test which aims to place those doctors into language courses”, the highlighted demonstrative pronoun could be replaced, preserving agreement, by:
Q2471836 English

In the excerpt “Validity has different elements, which we are now going to look at in turn”, the highlighted phrasal verb is defined as:
Q2471835 English

In the excerpt “For example, a valid driving test should include a practical driving component and not just a theoretical test of the rules of driving”, the highlighted term is classified as:
Q2471834 English

The text presents a concept of validity that should be applied in every teacher's pedagogical practice during the process of:
Q2471833 Physical Education
To build a powerful teaching and learning process, it is essential to problematize the many issues that run through everyday school life. In this regard, Nunes and Neira (2016) point out that a school Physical Education grounded in Cultural Studies, in a certain way, treats as equivalent:
Q2471832 Physical Education
To build a powerful pedagogical process, it is essential that the teacher masters the set of theoretical perspectives of school Physical Education. In this regard, Daolio (2004) points out that the critical-overcoming (crítico-superadora) approach falls short in its treatment of:
Answers
3601: A
3602: C
3603: C
3604: D
3605: B
3606: A
3607: B
3608: D
3609: D
3610: A
3611: D
3612: C
3613: B
3614: B
3615: C
3616: D
3617: C
3618: A
3619: A
3620: D