Research Design and Rationale
This study employs an explanatory sequential mixed-methods design: a structured survey is administered first, and focus group interviews are then conducted to explain and elaborate on the quantitative patterns (Creswell & Plano Clark, 2017; Tashakkori & Teddlie, 2010). Integration is planned at the design level (connecting qualitative sampling to survey results), the methods level (building the focus group guide from quantitative findings), and the interpretation level (joint displays and meta-inferences) to enhance validity through triangulation and complementarity (Fetters, Curry, & Creswell, 2013).
Setting, Participants, and Sampling
- Population and frame: Educators (and, where appropriate, students) from a defined set of schools or programs. The sampling frame is derived from institutional rosters or partner districts.
- Survey sampling: Stratified random sampling by institution type and demographic strata to improve representativeness and precision (Fowler, 2014). The planned target is N ≈ 400–600 respondents, which affords power ≥ .80 to detect small-to-moderate effects (f² ≈ .03–.05) in multivariable models at α = .05 (Cohen, 1988); a worked illustration of this arithmetic follows this list. Where clustering by school/classroom is non-negligible, design effects will inform the target N and multilevel models will be used (Raudenbush & Bryk, 2002).
- Focus groups: Purposeful maximum-variation sampling of survey respondents who consented to follow-up, selected to reflect key subgroups (e.g., role, experience, context) and contrasting quantitative profiles (e.g., high/low scores on focal constructs). Approximately 6–8 groups with 6–8 participants each are planned, with final numbers determined by thematic sufficiency and the diversity of perspectives necessary to explain survey results (Morgan, 1997; Krueger & Casey, 2015).
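The sketch below illustrates the sample-size reasoning referenced above: power for the omnibus regression F test (Cohen, 1988) plus a Kish design-effect adjustment for clustering. The effect size, predictor count, ICC, and cluster size are illustrative placeholders, not design commitments.

```python
# Illustrative arithmetic behind the target N: regression power plus a design-effect
# adjustment. All numeric inputs below are assumed placeholders.
from scipy.stats import f as f_dist, ncf

def regression_power(n, f2, n_predictors, alpha=0.05):
    """Power of the omnibus F test in multiple regression with n observations."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)   # critical value under H0
    nc = f2 * (df1 + df2 + 1)                # noncentrality parameter
    return ncf.sf(crit, df1, df2, nc)        # P(F > crit) under H1

f2, k = 0.03, 10                             # small effect, ~10 predictors (assumed)
n = k + 2
while regression_power(n, f2, k) < 0.80:     # smallest N reaching .80 power
    n += 1

icc, cluster_size = 0.02, 15                 # assumed school-level clustering
deff = 1 + (cluster_size - 1) * icc          # design effect
print(f"N under simple random sampling: {n}")
print(f"Design effect: {deff:.2f}; clustering-adjusted N: {round(n * deff)}")
```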
Instruments and Measures
- Survey: A composite instrument assessing targeted constructs (e.g., instructional practices, self-efficacy, implementation barriers) using established scales where possible; new items are developed following best practices in scale development and psychometric validation (DeVellis, 2017). Items use 5-point Likert-type formats, with balanced keying and clear anchors to reduce satisficing (Krosnick, 1991).
- Instrument development: Expert review for content validity; cognitive interviewing with 8–12 participants to refine item wording, comprehension, and response processes (Willis, 2005). A pilot test (n ≈ 50–80) will evaluate reliability and factor structure.
- Focus group protocol: A semi-structured guide linked to survey findings (e.g., areas of divergence, surprising regression results, or subgroup differences) and designed to elicit explanations, contextual factors, and implementation details (Krueger & Casey, 2015). Prompts are neutral, open-ended, and ordered from general to specific to minimize priming effects (Morgan, 1997).
Data Collection Procedures
- Survey administration: Online mode with mobile-optimized design; three to four tailored contacts (pre-notice, invitation, reminder sequences) to maximize response and reduce nonresponse bias (Dillman, Smyth, & Christian, 2014). Incentives are modest and ethically appropriate. Estimated completion time is ≤15 minutes to limit burden.
- Focus groups: 60–90 minutes each, in person or via secure video-conference. A trained moderator and an assistant conduct sessions using a standardized protocol; discussions are audio-recorded and professionally transcribed. Ground rules emphasize confidentiality while clarifying its limits in group settings (Krueger & Casey, 2015).
Quantitative Data Analysis
- Preparation: Data cleaning, screening for outliers, and evaluation of missingness mechanisms. Missing data will be addressed via multiple imputation under MAR assumptions, with sensitivity analyses (Rubin, 1987; Schafer & Graham, 2002); see the imputation sketch following this list.
- Measurement: Internal consistency reliability (Cronbach’s alpha and McDonald’s omega where appropriate); confirmatory factor analysis (CFA) to assess construct validity and, if applicable, measurement invariance across key subgroups prior to between-group comparisons (DeVellis, 2017); see the reliability sketch following this list.
- Modeling: Descriptive statistics with 95% confidence intervals; multivariable regression or multilevel models where clustering is present (Raudenbush & Bryk, 2002); see the modeling sketch following this list. Model diagnostics will assess linearity, multicollinearity, and residual assumptions. Multiple comparisons will be controlled using false discovery rate procedures when applicable (Benjamini & Hochberg, 1995). Effect sizes (e.g., standardized betas) will accompany p-values (Cohen, 1988).
- Bias checks: Nonresponse bias will be assessed through frame–respondent comparisons when auxiliary data are available and via late-responder analyses (Groves, 2006). Common method bias will be mitigated procedurally (assuring anonymity, psychologically separating measures, varied scale formats) and evaluated statistically (e.g., marker variable or latent method factor models) (Podsakoff et al., 2003).
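As a sketch of the planned imputation step, the fragment below uses statsmodels’ chained-equations (MICE) routines to impute and pool estimates across imputed datasets under MAR; the file name and the construct variables (practice, efficacy, barriers, experience) are hypothetical placeholders for the cleaned survey data.

```python
# Multiple imputation by chained equations with estimates pooled across imputations
# (Rubin, 1987). File and variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

survey = pd.read_csv("survey_clean.csv")
imp = mice.MICEData(survey[["practice", "efficacy", "barriers", "experience"]])

# The substantive model is refit on each imputed dataset and pooled via Rubin's rules.
analysis = mice.MICE("practice ~ efficacy + barriers + experience", sm.OLS, imp)
pooled = analysis.fit(n_burnin=10, n_imputations=20)
print(pooled.summary())
```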
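For the reliability checks, a minimal, dependency-light sketch of Cronbach’s alpha computed from its definition is shown below; the item names are hypothetical, and omega, CFA, and invariance testing would use dedicated SEM tooling.

```python
# Cronbach's alpha from its definition:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
# Item columns (eff_1 ... eff_6) are hypothetical names for one survey scale.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    items = items.dropna()                    # listwise here; imputation handled upstream
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

survey = pd.read_csv("survey_clean.csv")
efficacy_items = survey[[f"eff_{i}" for i in range(1, 7)]]
print(f"Self-efficacy scale alpha: {cronbach_alpha(efficacy_items):.2f}")
```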
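A sketch of the modeling and multiplicity-control steps, assuming a random-intercept model for school clustering; the formula terms, the school_id grouping variable, and the input file are assumptions for illustration.

```python
# Random-intercept model for clustering by school, followed by Benjamini-Hochberg FDR
# adjustment of the focal coefficient p-values. Names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

survey = pd.read_csv("survey_clean.csv")
fit = smf.mixedlm("practice ~ efficacy + barriers + experience",
                  data=survey, groups=survey["school_id"]).fit()
print(fit.summary())

# FDR control across the focal fixed-effect tests (Benjamini & Hochberg, 1995).
pvals = fit.pvalues.drop(["Intercept", "Group Var"], errors="ignore")
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(pd.DataFrame({"p_raw": pvals.values, "p_fdr": p_adj, "reject": reject},
                   index=pvals.index))
```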
Qualitative Data Analysis
- Approach: Reflexive thematic analysis conducted iteratively and systematically to identify explanatory mechanisms and contextual contingencies that account for quantitative patterns (Braun & Clarke, 2006).
- Coding: A codebook will be generated deductively from the research questions and key quantitative results and inductively from the data. Two trained coders will independently code an initial subset to refine the codebook; intercoder agreement will be examined (e.g., Cohen’s kappa; a calculation sketch follows this list) to inform training and code refinement, while final analysis emphasizes analytic rigor and transparency rather than kappa thresholds alone (Cohen, 1960; Braun & Clarke, 2006).
- Trustworthiness: Strategies include an audit trail, reflexive memos, peer debriefing, and targeted member checking of interpretive summaries where feasible (Lincoln & Guba, 1985). Reporting will follow relevant standards (e.g., COREQ) adapted to focus groups (Tong, Sainsbury, & Craig, 2007).
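A minimal sketch of the intercoder-agreement check on the double-coded calibration subset; the export format (one row per transcript segment with each coder’s assignment per code) is an assumption for illustration.

```python
# Per-code Cohen's kappa between two coders on the calibration subset.
# The CSV layout (segment_id, code, coder_a, coder_b) is an assumed export format.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

coding = pd.read_csv("calibration_coding.csv")
for code, rows in coding.groupby("code"):
    kappa = cohen_kappa_score(rows["coder_a"], rows["coder_b"])
    print(f"{code}: kappa = {kappa:.2f} (n = {len(rows)} segments)")
```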
Mixed-Methods Integration
- Connecting: Quantitative results will inform qualitative sampling by selecting participants representing key subgroups or outlier patterns.
- Building: Focus group questions will probe unexpected or theory-relevant quantitative findings.
- Merging: Joint displays will align quantitative estimates with qualitative themes to generate meta-inferences, prioritizing convergence, complementarity, or explanation of divergence (Fetters et al., 2013; Creswell & Plano Clark, 2017), as sketched below.
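To illustrate the merging step, the sketch below assembles a simple joint display by aligning subgroup survey estimates with the qualitative themes coded to the same subgroups; the input files and column names are hypothetical.

```python
# Build a simple quantitative-by-qualitative joint display (Fetters et al., 2013).
# Input files and column names are hypothetical placeholders.
import pandas as pd

quant = pd.read_csv("subgroup_estimates.csv")   # subgroup, mean, ci_low, ci_high
qual = pd.read_csv("themes_by_subgroup.csv")    # subgroup, theme, illustrative_quote

# Format each estimate as "mean [ci_low, ci_high]" for the display column.
quant["estimate_ci"] = (quant["mean"].round(2).astype(str)
                        + " [" + quant["ci_low"].round(2).astype(str)
                        + ", " + quant["ci_high"].round(2).astype(str) + "]")
joint = quant.merge(qual, on="subgroup", how="outer")
joint[["subgroup", "estimate_ci", "theme", "illustrative_quote"]].to_csv(
    "joint_display.csv", index=False)
```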
Quality Assurance and Data Management
- Standardization: Detailed field manuals for survey administration and focus group facilitation; training and calibration of data collectors and moderators; pilot rehearsals.
- Instrument fidelity: Cognitive testing and pilot analyses; timing checks to identify satisficing; embedded attention checks with minimal disruption (Krosnick, 1991).
- Data handling: Secure storage, encryption, and role-based access; de-identification of transcripts; version-controlled analysis scripts and preregistered quantitative analysis plans to enhance transparency (where feasible).
- Ethical compliance: Institutional ethical approval; informed consent; right to withdraw without penalty; data minimization; confidentiality safeguards consistent with BERA guidelines (BERA, 2018).
Risks and Mitigation
- Low survey response rate: Tailored contact protocol, mixed-mode follow-up if needed, modest incentives, and short instrument (Dillman et al., 2014). Nonresponse weighting if auxiliary data allow (Groves, 2006).
- Scheduling and recruitment challenges for focus groups: Flexible scheduling (including virtual sessions), oversampling consenting respondents, and offering alternative small-group or paired interviews where necessary (Morgan, 1997).
- Technology failures (online survey platform or virtual focus groups): Redundant systems, pre-session tech checks, and contingency recording solutions.
- Social desirability and group conformity: Neutral phrasing, clear confidentiality norms, skilled moderation, and triangulation with survey findings to detect inconsistencies (Krueger & Casey, 2015).
- Data loss or confidentiality breach: Encrypted storage, regular backups, and minimal linking files kept separately; strict access controls.
- Unexpected contextual disruptions (e.g., policy changes, school closures): Use of remote modalities, extended data collection window, and adaptive scheduling.
Milestones and Timeline (12 months)
- Months 1–2: Ethics approval; finalize design; instrument drafting; expert review; cognitive interviews.
- Month 3: Pilot testing (survey and focus group guide); instrument revision; preregistration of quantitative analysis plan.
- Months 4–5: Main survey rollout; reminder protocol; begin preliminary cleaning.
- Months 6–7: Focus group recruitment and data collection; ongoing transcription.
- Months 7–8: Quantitative data cleaning, imputation, and measurement modeling; descriptive and preliminary inferential analyses.
- Months 8–9: Qualitative coding and thematic analysis; intercoder calibration; audit trail consolidation.
- Month 10: Integration via joint displays; development of meta-inferences.
- Month 11: Member checking of interpretive summaries; sensitivity analyses; finalize results.
- Month 12: Manuscript/report preparation and dissemination; archive de-identified data and analysis scripts as appropriate.
Limitations and Delimitations
- Focus groups cannot fully guarantee individual confidentiality, and findings will be interpreted with this constraint acknowledged. Survey results may be subject to residual nonresponse or common method bias; procedural and statistical controls aim to minimize these threats.
References
- Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300.
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
- British Educational Research Association (BERA). (2018). Ethical guidelines for educational research (4th ed.).
- Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge.
- Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). SAGE.
- DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.). SAGE.
- Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.
- Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed methods designs—Principles and practices. Health Services Research, 48(6 Pt 2), 2134–2156.
- Fowler, F. J. (2014). Survey research methods (5th ed.). SAGE.
- Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646–675.
- Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236.
- Krueger, R. A., & Casey, M. A. (2015). Focus groups: A practical guide for applied research (5th ed.). SAGE.
- Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. SAGE.
- Morgan, D. L. (1997). Focus groups as qualitative research (2nd ed.). SAGE.
- Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
- Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). SAGE.
- Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys. Wiley.
- Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147–177.
- Tashakkori, A., & Teddlie, C. (Eds.). (2010). SAGE handbook of mixed methods in social & behavioral research (2nd ed.). SAGE.
- Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349–357.
- Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. SAGE.