Evaluation Design and Methodology

The evaluation design used mixed methods and included three levels of analysis: 1) national-level analysis of innovation policy; 2) regional-level analysis of collaborative frameworks, focusing on the RIICs and other government-industry-academe (GIA) linkages; and 3) individual-level analysis of HEIs and research and development institutes (RDIs) regarding STRIDE interventions and their effects on IE improvement. Table 1 shows the data sources for performance indicators, Table 2 contains data collection methods and the number of respondents per unit of analysis, and Table 3 describes the data processing tools and methodology.

The evaluation team collected quantitative data for two groups through an online Capacity to Innovate survey. Set A targeted HEI scholars and grantees; STRIDE provided the evaluation team with the list of these scholars and grantees, and 70 of the 126 (55 percent) responded. Set B consisted of participants in the focus group discussions (FGDs) for both the RIICs and the GIAs, of whom 22 of 63 (35 percent) responded. Annex B provides a list of all FGD participants. Results for Set A are in Annex C, while tabular data for Set B are in Annex D.

Qualitative data came from 30 key informant interviews (KIIs) and nine FGDs, with the FGDs engaging a total of 63 participants. The four sample regions were the National Capital Region (NCR) and Regions 4-A, 7, and 10. The case study regions for the RIIC case study were Regions 11 and 3, where the evaluation team conducted six KIIs. The national-level KII respondents were undersecretaries of the DOST and the DTI, an assistant secretary of the National Economic and Development Authority (NEDA), the president of PASUC, the director general of the Intellectual Property Office of the Philippines (IPOPHL), the president of SEIPI (a private firm), and the executive director of the CHED. The regional KII respondents, including those in the case study regions, were the regional directors of the DOST, DTI, and NEDA. KII respondents from HEIs were presidents, chancellors, vice presidents for R&D, and holders of similar positions at the sample universities. FGD participants, who are currently the major actors in the IE, were business leaders who participate in the RIICs and GIA linkages, technical personnel from academe and regional government offices, and elected local government officials. The percentage of women respondents ranged from 43 percent at the national level to 53 percent at the industry level.

The evaluation team processed and analyzed the qualitative data it collected using mind-mapping and NVivo software and conducted content analysis. Five analysts coded all the information gathered from the KIIs and FGDs. To ensure consistency in coding the KII and FGD information, the evaluation team followed these steps: using a common codebook; having each analyst complete two cycles of coding; estimating interrater reliability through the NVivo 9 collaboration cloud; and validating, as a group, the codes used and the themes generated. The interrater reliability ranged from .80 to 1.00, which indicates reliable coding of the information (Annex I).
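The reported interrater reliability range (.80 to 1.00) is consistent with pairwise agreement statistics such as Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. As an illustration only (the evaluation relied on NVivo's built-in calculation, and the theme labels below are hypothetical), a minimal sketch of the statistic for two coders:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical codes on the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two analysts assigning themes to ten excerpts
a = ["policy", "funding", "policy", "linkage", "funding",
     "policy", "linkage", "policy", "funding", "linkage"]
b = ["policy", "funding", "policy", "linkage", "policy",
     "policy", "linkage", "policy", "funding", "linkage"]
print(round(cohens_kappa(a, b), 2))  # prints 0.85
```

Values of .80 and above are conventionally read as strong agreement, which is why the evaluation treats the observed range as evidence of reliable coding.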

The evaluation team analyzed quantitative data using descriptive statistics and graphics. Since this was a performance evaluation study, the team looked for evidence rather than statistical significance, consistent with USAID guidance for monitoring and evaluation in the program cycle. The evaluation team also used joint displays that combined qualitative and quantitative information in the same graphics, supporting a mixed-methods approach to the analysis.