First, pick an area of interest that relates to student achievement or educational outcomes. Tell us a bit about the area and why you think research in it is important. Then, design two different studies: one using quantitative measures and one using qualitative methods. In your post, please use the following format:
Identify the topic area you want to study.
Describe your quantitative or experimental approach. First, discuss the research question that you would like to answer using this approach. Then, discuss the approach you might take and the implications of your potential findings. Please also discuss challenges and limitations to this approach!
Describe your qualitative approach. First, discuss the research question that you would like to answer using this approach. Remember that this is most likely a different question than you used for your quantitative study. Think about what question is best suited for a qualitative study. Discuss the merits of this approach and how it will help answer your research question. Discuss the approach you might take and the implications of your potential findings. Please also discuss challenges and limitations to this approach!
Finally, discuss how these two approaches differ and/or complement each other and how they help you gain knowledge about the topic area you chose.
Article 1
True and Quasi-Experimental Designs
Barry Gribbons and Joan Herman
Source: Gribbons, B., & Herman, J. (1997). True and quasi-experimental designs. Washington, DC: ERIC Clearinghouse on Assessment and Evaluation. [ED421483]
Experimental designs are especially useful in addressing evaluation questions about the effectiveness and impact of programs. Emphasizing the use of comparative data as context for interpreting findings, experimental designs increase our confidence that observed outcomes are the result of a given program or innovation instead of a function of extraneous variables or events. For example, experimental designs help us to answer such questions as the following: Would adopting a new integrated reading program improve student performance? Is TQM having a positive impact on student achievement and faculty satisfaction? Is the parent involvement program influencing parents’ engagement in and satisfaction with schools? How is the school’s professional development program influencing teachers’ collegiality and classroom practice?
As one can see from the example questions above, designs specify from whom information is to be collected and when it is to be collected. Among the different types of experimental design, there are two general categories:
- true experimental design: This category of design includes more than one purposively created group, common measured outcome(s), and random assignment.
- quasi-experimental design: This category of design is most frequently used when it is not feasible for the researcher to use random assignment.
This digest describes the strengths and limitations of specific types of quasi-experimental and true experimental design.
Quasi-experimental Designs In Evaluation
As stated previously, quasi-experimental designs are commonly employed in the evaluation of educational programs when random assignment is not possible or practical. Although they are often the only feasible option, quasi-experimental designs are subject to numerous interpretation problems. Frequently used types of quasi-experimental designs include the following:
Nonequivalent group, posttest only. The nonequivalent group, posttest only design consists of administering an outcome measure to two groups: a program/treatment group and a comparison group. For example, one group of students might receive reading instruction using a whole language program while the other receives a phonetics-based program. After twelve weeks, a reading comprehension test can be administered to see which program was more effective.
A major problem with this design is that the two groups might not be equivalent before instruction takes place and may differ in important ways that influence what reading progress they are able to make. For instance, if the students in the phonetics group perform better, there is no way of determining whether they were better prepared or better readers even before the program, or whether other factors influenced their growth.
Nonequivalent group, pretest-posttest. The nonequivalent group, pretest-posttest design partially eliminates a major limitation of the nonequivalent group, posttest only design. At the start of the study, the researcher empirically assesses the differences in the two groups. Therefore, if the researcher finds that one group performs better than the other on the posttest, s/he can rule out initial differences (if the groups were in fact similar on the pretest) and normal development (e.g. resulting from typical home literacy practices or other instruction) as explanations for the differences.
Some problems still might result from students in the comparison group being incidentally exposed to the treatment condition, being more motivated than students in the other group, having more motivated or involved parents, etc. Additional problems may result from discovering that the two groups do differ on the pretest measure. If groups differ at the onset of the study, any differences that occur in test scores at the conclusion are difficult to interpret.
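As a purely illustrative sketch (the scores below are invented, not data from the digest), the pretest-posttest logic amounts to checking that the groups start out similar and then comparing average improvement, a simple gain-score comparison:

```python
# Hypothetical pretest/posttest reading scores for two nonequivalent groups.
# If the pretest means are similar, differences in mean gain are easier to
# attribute to the programs rather than to initial group differences.

def mean(xs):
    return sum(xs) / len(xs)

def mean_gain(pre, post):
    """Average improvement from pretest to posttest."""
    return mean([b - a for a, b in zip(pre, post)])

whole_language_pre  = [41, 38, 45, 40, 37]
whole_language_post = [52, 47, 58, 50, 49]

phonetics_pre  = [40, 42, 39, 44, 36]
phonetics_post = [55, 56, 50, 57, 48]

# Check pretest equivalence before interpreting posttest differences.
pretest_gap = mean(whole_language_pre) - mean(phonetics_pre)
gain_gap = (mean_gain(phonetics_pre, phonetics_post)
            - mean_gain(whole_language_pre, whole_language_post))
print(round(pretest_gap, 2), round(gain_gap, 2))
```

In this fabricated example the pretest means match, so the gain difference is at least not attributable to initial differences on the measured outcome; the unmeasured threats (motivation, parent involvement, exposure to the other condition) remain.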
Time series designs. In time series designs, several assessments (or measurements) are obtained from the treatment group as well as from the control group. This occurs prior to and after the application of the treatment. The series of observations before and after can provide rich information about students’ growth. Because measures at several points in time prior and subsequent to the program are likely to provide a more reliable picture of achievement, the time series design is sensitive to trends in performance. Thus, this design, especially if a comparison group of similar students is used, provides a strong picture of the outcomes of interest. Nevertheless, although to a lesser degree, limitations and problems of the nonequivalent group, pretest-posttest design still apply to this design.
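A minimal illustration of the time-series idea, with invented weekly scores and the program introduced midway; a real analysis would also model trends over time and, ideally, use a comparison group of similar students:

```python
# Hypothetical weekly assessment means for a treatment group, with the
# program introduced after week 5. A crude interrupted time-series check
# compares the average level before and after the intervention; repeated
# measures make the picture more reliable than a single pre/post pair.

def mean(xs):
    return sum(xs) / len(xs)

treatment_scores = [50, 51, 50, 52, 51,   # weeks 1-5: before the program
                    57, 58, 60, 59, 61]   # weeks 6-10: after the program

before, after = treatment_scores[:5], treatment_scores[5:]
level_shift = mean(after) - mean(before)
print(round(level_shift, 1))
```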
True Experimental Designs
The strongest comparisons come from true experimental designs in which subjects (students, teachers, classrooms, schools, etc.) are randomly assigned to program and comparison groups. It is only through random assignment that evaluators can be assured that groups are truly comparable and that observed differences in outcomes are not the result of extraneous factors or pre-existing differences. For example, without random assignment, what inference can we draw from findings that students in reform classrooms outperformed students in non-reform classrooms if we suspect that the reform teachers were more qualified, innovative, and effective prior to the reform? Do we attribute the observed difference to the reform program or to pre-existing differences between groups? In the former case, the reform appears to be effective, likely worth the investment, and possibly justifying expansion; in the latter case, alternative inferences are warranted. There are several types of true experimental design:
Posttest Only, Control Group. Posttest only, control group designs differ from the previously discussed designs in that subjects are randomly assigned to one of the two groups. Given sufficient numbers of subjects, randomization helps to assure that the two groups (or conditions, raters, occasions, etc.) are comparable or equivalent in terms of characteristics which could affect any observed differences in posttest scores. Although a pretest can be used to assess or confirm whether the two groups were initially the same on the outcome of interest (as in pretest-posttest, control group designs), a pretest is likely unnecessary when randomization is used and large numbers of students and/or teachers are involved. With smaller samples, pretesting may be advisable to check on the equivalence of the groups.
Other Designs. Some other general types of designs include counterbalanced and matched subjects designs (for a more detailed discussion of different designs, see Campbell & Stanley, 1966). With counterbalanced designs, all groups participate in more than one randomly ordered treatment (and control) condition. In matched designs, pairs of students matched on important characteristics (for example, pretest scores or demographic variables) are assigned to one of the two treatment conditions. These approaches are effective if randomization is employed.
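To make the mechanics concrete, here is a hedged sketch of simple random assignment and of the matched-pairs variant; the student roster and pretest scores are invented for illustration:

```python
import random

# Random assignment gives every subject the same chance of landing in
# either condition, so with enough subjects the groups should be
# comparable on measured and unmeasured characteristics alike.

def randomly_assign(subjects, seed=0):
    """Shuffle the roster and split it in half."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def matched_assign(subjects_with_scores, seed=0):
    """Matched-pairs variant: rank by pretest score, then randomly split
    each adjacent pair between treatment and control."""
    rng = random.Random(seed)
    ranked = sorted(subjects_with_scores, key=lambda s: s[1])
    treatment, control = [], []
    for a, b in zip(ranked[::2], ranked[1::2]):
        pair = [a, b]
        rng.shuffle(pair)
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

# Hypothetical roster of 12 students with pretest scores.
students = [("s%d" % i, 30 + i) for i in range(12)]

g1, g2 = randomly_assign([name for name, _ in students])
t, c = matched_assign(students)
print(len(g1), len(g2), len(t), len(c))
```

Matching guarantees that the two groups are balanced on the matching variable even in small samples, which pure randomization only achieves on average.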
Even true experimental designs can be problematic, however (Cook & Campbell, 1979). One threat is that the control group can be inadvertently exposed to the program; a related threat occurs when key aspects of the program also exist in the comparison group. Additionally, one of the conditions (groups), such as one of two instructional programs, may be perceived as more desirable than the other. If participants in the study learn of the other group, important motivational differences (becoming demoralized, or even trying harder to compensate) could affect the results. Differences in the quality with which a program or comparison treatment is implemented can also influence results (for example, the teachers implementing one or the other may have greater content or pedagogical knowledge). Still another threat to the validity of a design is differential participant mortality in the two groups.
Limitations Of True Experimental Design
Experimental designs are also limited by the narrow range of evaluation purposes they address. When conducting an evaluation, the researcher certainly needs to develop adequate descriptions of programs, both as they were intended and as they were realized in the specific setting. Also, the researcher frequently needs to provide timely, responsive feedback for purposes of program development or improvement. Although less common, access and equity issues within a critical theory framework may also be important. Experimental designs do not address these facets of evaluation.
With complex educational programs, rarely can we control all the important variables which are likely to influence program outcomes, even with the best experimental design. Nor can the researcher necessarily be sure, without verification, that the implemented program was really different in important ways from the program of the comparison group(s), or that the implemented program (not other contemporaneous factors or events) produced the observed results. Being mindful of these issues, it is important for evaluators not to develop a false sense of security.
Finally, even when the purpose of the evaluation is to assess the impact of a program, logistical and feasibility issues constrain experimental frameworks. Randomly assigning students in educational settings frequently is not realistic, especially when the different conditions are viewed as more or less desirable. This often leads the researcher to use quasi-experimental designs. Problems associated with the lack of randomization are exacerbated as the researcher begins to realize that the programs and settings are in fact dynamic, constantly changing, and almost always unstandardized.
Recommendations For Evaluation
The primary factor which directs the evaluation design is the purpose for the evaluation. Restated, it is critical to consider the utility of any evaluation information. If the program’s impact on participant outcomes is a key concern, or if multiple programs (instructional strategies, or something else) are being considered and educators are looking for evidence to assess the relative effectiveness of each to inform decisions about which approach to select, then experimental designs are appropriate and necessary. Nonetheless, the resulting information should be augmented by rich descriptions of programs, and mechanisms need to be established to provide timely, responsive feedback (for a detailed discussion of other approaches to evaluation, see Lincoln & Guba, 1985; Patton, 1997; and Reinhart & Rallis, 1994).
In addition to using multiple evaluation methods, evaluators should take care to collect the right kinds of information when using experimental frameworks. Measures must be aligned with the program’s goals or objectives. Additionally, it is often much more powerful to employ multiple measures. Triangulating several lines of evidence or measures in answering specific evaluation questions about program outcomes increases the reliability and credibility of results. Furthermore, when interpreting this evidence, it is often useful to use absolute standards of success in addition to relative comparisons. The last recommendation is to always consider alternative explanations for any observed differences in outcome measures. If the treatment group outperforms the control group, consider a full range of plausible explanations in addition to the claim that the innovative practice is more effective. Program staff and participants can be very helpful in identifying these alternative explanations and evaluating the plausibility of each.
Additional Reading
- Campbell, D.T. & Stanley, J.C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Pub. Co.
- Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally College Pub. Co.
- Lincoln, Y.S. & Guba, E.G. (1985). Naturalistic inquiry. Beverly Hills: Sage Publications.
- Patton, M.Q. (1997). Utilization focused evaluation (3rd ed.). Thousand Oaks, CA: Sage Publications.
- Reinhart, C.S. & Rallis, S.F. (1994). The qualitative-quantitative debate: New perspectives. San Francisco: Jossey-Bass.
Article 2
Comparative Research Methods
Linda Hantrais
Linda Hantrais is Director of the European Research Centre, Loughborough University. She is convenor of the Cross-National Research Group and series editor of Cross-National Research Papers. The main focus of her research is cross-national theory, method and practice, particularly with reference to social policy. She has conducted a number of comparative studies, including ESRC/CNAF/European Commission-funded collaborative projects on women in professional occupations in Britain and France and on families and family policies in Europe. Her recent publications include a co-edited book, with Steen Mangen, on Cross-National Research Methods in the Social Sciences (Pinter, 1996).
Key points
- Comparative research methods have long been used in cross-cultural studies to identify, analyse and explain similarities and differences across societies.
- Whatever the methods used, research that crosses national boundaries increasingly takes account of socio-cultural settings.
- Problems arise in managing and funding cross-national projects, in gaining access to comparable datasets and in achieving agreement over conceptual and functional equivalence and research parameters.
- Attempts to find solutions to these problems involve negotiation and compromise and a sound knowledge of different national contexts.
- The benefits to be gained from cross-national work include a deeper understanding of other cultures and of their research processes.
The comparative approach to the study of society has a long tradition dating back to Ancient Greece. Since the nineteenth century, philosophers, anthropologists, political scientists and sociologists have used cross-cultural comparisons to achieve various objectives.
For researchers adopting a normative perspective, comparisons have served as a tool for developing classifications of social phenomena and for establishing whether shared phenomena can be explained by the same causes. For many sociologists, comparisons have provided an analytical framework for examining (and explaining) social and cultural differences and specificity. More recently, as greater emphasis has been placed on contextualisation, cross-national comparisons have served increasingly as a means of gaining a better understanding of different societies, their structures and institutions.
The development of this third approach has coincided with the growth in interdisciplinary and international collaboration and networking in the social sciences, which has been encouraged since the 1970s by a number of European-wide initiatives. The European Commission has established several large-scale programmes, and observatories and networks have been set up to monitor and report on social and economic developments in member states. At the same time, government departments and research funding bodies have shown a growing interest in international comparisons, particularly in the social policy area, often as a means of evaluating the solutions adopted for dealing with common problems or to assess the transferability of policies between member states.
Yet, relatively few social scientists feel they are well equipped to conduct studies that seek to cross national boundaries, or to work in international teams. This reluctance may be explained not only by a lack of knowledge or understanding of different cultures and languages but also by insufficient awareness of the research traditions and processes operating in different national contexts.
Approaches to cross-national research
For the purposes of this article, a study is held to be cross-national and comparative, when individuals or teams set out to examine particular issues or phenomena in two or more countries with the express intention of comparing their manifestations in different socio-cultural settings (institutions, customs, traditions, value systems, lifestyles, language, thought patterns), using the same research instruments either to carry out secondary analysis of national data or to conduct new empirical work. The aim may be to seek explanations for similarities and differences, to generalise from them or to gain a greater awareness and a deeper understanding of social reality in different national contexts.
In many respects, the methods adopted in cross-national comparative research are no different from those used for within-nation comparisons or for other areas of sociological research. The descriptive or survey method, which will usually result in a state-of-the-art review, is generally the first stage in any large-scale international comparative project, such as those carried out by the European observatories and networks. A juxtaposition approach is often adopted at this stage: data gathered by individuals or teams, according to agreed criteria, and derived either from existing materials or new empirical work, are presented side by side, frequently without being systematically compared.
Some large-scale projects are intended to be explanatory from the outset and therefore focus on the degree of variability observed from one national sample to another. Such projects may draw on several methods: the inductive method, starting from loosely defined hypotheses and moving towards their verification; the deductive method, applying a general theory to a specific case in order to interpret certain aspects; and the demonstrative method, designed to confirm and refine a theory.
Rather than each researcher or group of researchers investigating their own national context and then pooling information, a single researcher or single-nation team of researchers (the ‘safari’ approach) may formulate the problem and research hypotheses and carry out studies in more than one country, using replication of the experimental design, generally to collect and analyse new data. The method is often adopted when a smaller number of countries is involved and for more qualitative studies, where researchers are looking at a well-defined issue in two or more national contexts and are required to have intimate knowledge of all the countries under study. The approach may combine surveys, secondary analysis of national data, and also personal observation and an interpretation of the findings in relation to their wider social contexts.
Irrespective of the organisational structure of the research, a shift is occurring in emphasis away from descriptive, universalist and ‘culture-free’ approaches to social phenomena. The societal approach, which has perhaps been most fully explicated in relation to industrial sociology (Maurice et al., 1986), implies that the researcher sets out to identify the specificity of social forms and institutional structures in different societies and to look for explanations of differences by referring to the wider social context. Another result of the greater emphasis on contextualisation in comparative studies is their increasingly interdisciplinary and multidisciplinary character, since a wide range of factors must be considered at the lowest possible level of disaggregation.
Problems in cross-national comparative research
The shift in orientation towards a more interpretative, culture-bound approach means that linguistic and cultural factors, together with differences in research traditions and administrative structures, cannot be ignored. If these problems go unresolved, they are likely to affect the quality of the results of the whole project, since the researcher runs the risk of losing control over the construction and analysis of key variables.
Managing and funding cross-national projects
The mix of countries selected in comparative studies affects the quality and comparability of the data as well as the nature of the collaboration between researchers. In ideal conditions, a project team manager will be able to select the countries to be included in the study and researchers with appropriate knowledge and expertise to undertake the work. In small-scale bilateral comparisons, this may be feasible, but more often the reality is different, and participation may be determined by factors (sometimes political) which do not make for easy relationships between team members. European programmes often include all EU member states, although the countries concerned may represent very different stages of economic and social development and be influenced by different cultural value systems, assumptions and thought patterns.
The financial resources available for the research differ considerably from one national context to another. Funding bodies have their own agenda: a topic that may attract interest in one country may not obtain funding elsewhere.
The amount of time that can be allocated to the research, the ease with which reliable data can be obtained and the relative expense involved are also likely to affect the quality of the material for comparisons.
The problems of organising meetings which all participants in a project can attend, of negotiating a research agenda, of reaching agreement on approaches and definitions and of ensuring that they are observed are not to be underestimated. Linguistic and cultural affinity is central to an understanding of why researchers from some national groups find it easier to work together and to reach agreement on research topics, design and instruments. Even within a single discipline, differences in the research traditions of participating countries may affect the results of a collaborative project and the quality of any joint publications.
Accessing comparable data
In many European projects, national experts are required to provide descriptive accounts of selected trends and developments derived from national data sources. The co-ordinators then synthesise information on key themes and issues (see, for example, Ditch et al., 1996). Since much of the international work carried out at European level is not strictly comparative at the design and data collection stages, the findings cannot then be compared systematically. Data collection is strongly influenced by national conventions. The source of the data, the purpose for which they were gathered, the criteria used and the method of collection may vary considerably from one country to another, and the criteria adopted for coding data may change over time.
In some areas, national records may be non-existent or may not go back very far. For certain topics, information may be routinely collected in tailor-made surveys in a number of the participating countries, whereas in others it may be more limited because the topic has attracted less attention among policy-makers. Official statistics may be produced in too highly aggregated a form and may not have been collected systematically over time. In many multinational studies, much time and effort is expended on trying to reduce classifications to a common base.
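The effort of reducing classifications to a common base can be sketched as a simple recoding step; the country names, categories and mappings below are invented purely for illustration:

```python
# Hypothetical illustration of "reducing classifications to a common base":
# two countries code employment status differently, so each national
# category is mapped onto a shared, coarser scheme before comparison.
# All category names and mappings here are invented for the example.

COMMON_SCHEME = {
    "country_a": {
        "salaried": "employed",
        "self-employed": "employed",
        "jobseeker": "unemployed",
        "retired": "inactive",
    },
    "country_b": {
        "employee": "employed",
        "own-account worker": "employed",
        "registered unemployed": "unemployed",
        "pensioner": "inactive",
    },
}

def harmonise(country, records):
    """Recode national categories into the common classification,
    flagging any category the mapping does not cover."""
    mapping = COMMON_SCHEME[country]
    return [mapping.get(status, "unmapped") for status in records]

print(harmonise("country_b", ["employee", "pensioner", "apprentice"]))
```

The "unmapped" flag matters in practice: categories that exist in one national scheme but not in the common base are exactly where comparability quietly breaks down.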
Concepts and research parameters
Despite considerable progress in the development of large-scale harmonised international databases, such as Eurostat, which tend to give the impression that quantitative comparisons are unproblematic, attempts at cross-national comparisons are still too often rendered ineffectual by the lack of a common understanding of central concepts and the societal contexts within which phenomena are located. Agreement is therefore difficult to reach over research parameters and units of comparison.
For example, the demographic and employment statistics compiled at European level are socially constructed and often conceal quite different national situations (Hantrais and Letablier, 1996). Even the definition of a country or society can be problematic, since there is no single identifiable, durable and relatively stable sociological unit equivalent to the total geographical territory of a nation.
Language can present a major obstacle to effective international collaboration, since it is not simply a medium for conveying concepts, but part of the conceptual system, reflecting institutions, thought processes, values and ideology, and implying that the approach to a topic and interpretations of it will differ according to the language of expression.
Although defining a time span may appear to be a simple matter for a longitudinal study, innumerable problems can arise when national datasets are being used. These problems are compounded when comparisons are based on secondary analysis of existing national datasets, since it may not always be possible to apply agreed criteria uniformly.
Solutions to the problems of cross-national comparisons
Most researchers engaged in cross-national comparative work admit that such research, by its very nature, demands greater compromises in methods than a single-country focus.
The problems of building and managing a research team can often be resolved only by a process of trial and error, and the quality of the contributions to multinational projects may be very uneven. The managerial skills and experience of the co-ordinators are, therefore, critical in holding the team together, in obtaining material and providing the comparative framework for the research, which also requires a sound knowledge and understanding of other national contexts, their languages and intellectual traditions.
When existing large-scale data are being re-analysed, the solution is not to disregard major demographic variables, since they may indicate greater intranational than international differences. An attempt has to be made to establish comparable groupings from the most detailed information available (the raw data) and to focus on the broader characteristics of the sample.
The solution to the problem of defining the unit of observation may be to carry out research into specific organisational, structural fields or sectors and to look at subsocietal units rather than whole societies.
Where new studies are being carried out, it should, theoretically, be possible to replicate the research design and use the same concepts and parameters simultaneously in two or more countries on matched groups.
Whatever the method adopted, the researcher needs to remain alert to the dangers of cultural interference, to ensure that discrepancies are not forgotten or ignored and to be wary of using what may be a sampling bias as an explanatory factor. In interpreting the results, wherever possible, findings should be examined in relation to their wider societal context and with regard to the limitations of the original research parameters.
Why undertake cross-national comparisons?
Although the obstacles to successful cross-national comparisons may be considerable, so are the benefits:
- When researchers from different backgrounds are brought together on collaborative or cross-national projects, valuable personal contacts can be established, enabling them to capitalise on their experience and knowledge of different intellectual traditions and to compare and evaluate a variety of conceptual approaches.
- Comparisons can lead to fresh, exciting insights and a deeper understanding of issues that are of central concern in different countries. They can lead to the identification of gaps in knowledge and may point to possible directions that could be followed and about which the researcher may not previously have been aware. They may also help to sharpen the focus of analysis of the subject under study by suggesting new perspectives.
- Cross-national projects give researchers a means of confronting findings in an attempt to identify and illuminate similarities and differences, not only in the observed characteristics of particular institutions, systems or practices, but also in the search for possible explanations in terms of national likeness and unlikeness. Cross-national comparativists are forced to attempt to adopt a different cultural perspective, to learn to understand the thought processes of another culture and to see it from the native’s viewpoint, while also reconsidering their own country from the perspective of a skilled, external observer.
References and further reading
Castles, F. (ed.) (1993) Families of Nations: Patterns of Public Policy in Western Democracies, Aldershot: Dartmouth.
Ditch, J., Barnes, H., Bradshaw, J., Commaille, J. and Eardley, T. (1996) A Synthesis of National Family Policies 1994, York: Social Research Unit.
Hantrais, L. and Letablier, M-T. (1996) Families and Family Policies in Europe, London/New York: Longman.
Hantrais, L. and Mangen, S. (1996) Cross-National Research Methods in the Social Sciences, London/New York: Pinter.
Heidenheimer, A., Heclo, H. and Adams, C. (1990) Comparative Public Policy, 3rd edn, New York: St Martin’s Press.
Johnson, J.D. and Tuttle, F. (1989) Problems in Intercultural Research, Newbury Park: Sage.
Jones, C. (ed.) (1985) Patterns of Social Policy: an Introduction to Comparative Analysis, London: Tavistock.
Kohn, M.L. (ed.) (1989) Cross-National Research in Sociology, Newbury Park: Sage.
Maurice, M., Sellier, F. and Silvestre, J-J. (1986) The Social Foundations of Industrial Power, Cambridge, Mass: MIT Press.
Øyen, E. (ed.) (1990) Comparative Methodology: Theory and Practice in International Social Research, London: Sage.
Ragin, C. (1991) Issues and Alternatives in Comparative Social Research, Leiden: Brill.
Smelser, N. (1976) Comparative Methods in the Social Sciences, Englewood Cliffs, NJ.: Prentice Hall.
The Cross-National Research Group
The Cross-National Research Group was established in 1985 with the aim of providing a forum for discussion and exchange of ideas and experience between researchers from different social science disciplines engaged in cross-national comparative studies, for those planning to embark on cross-national projects and for policy-makers interested in exploiting the findings from international studies.
The Group has organised four series of seminars in cross-national research methods:
- Doing Cross-National Research;
- The Implications of 1992 for Social Policy;
- Concepts and Contexts in International Comparisons;
- Concepts and Contexts in International Comparisons of Family Policies in Europe;
- Methodological Approaches to International Comparisons.
The contributions to the seminars are published as Cross-National Research Papers and in an edited collection (Hantrais and Mangen, 1996).