Below you'll find my final assignment for ECUR 809: an evaluation proposal for Sunrise of Life, a non-profit organization.
This was an interesting process. I began by creating the survey to the best of my ability, using the references provided in the module to carefully order and word each question in a logical, simple manner. Using Google Forms, I organized the questions onto different pages, each with its own focus area.
When I gave the form to my testers, providing only very general guidance about the feedback I was looking for, I got the impression that the order and organization must have been solid: the group focused almost entirely on specific wording and on making specific suggestions, as opposed to giving broad feedback. There was little mention of the scales used or of other organizational elements (like placing broad, overview questions before the demographics to increase engagement). I want to take this as a compliment to my efforts, but I'm not sure I should, as I still don't feel terribly confident in my ability to create a powerful, information-rich survey. I have also attempted to engage a Sunrise of Life Board member but have yet to receive their feedback.

Below you will find the link to my original survey, followed by the suggestions for improvement gathered during the process:

Link to original survey: Sunrise of Life Survey - Attempt 1

Suggestions:

First Page:
Sunrise of Life: Tupendane:
Tupendane-Education:
With these suggestions, the survey was altered to improve the specificity of the language used in the questions and statements. Once feedback is received from the Board of Directors member, I have no doubt more changes will be needed, but, out of respect for the due date of this assignment, revisions have been made to reflect the feedback gathered to this point. (Lesson learned: it takes longer to get feedback from individuals outside of a specific focus group.) I also believe the responses gathered from the focus group reflect a very Canadian perspective on the survey. Some of the changes made reflect this perspective and may have to be altered again once someone with more direct experience with the target population offers their suggestions. Another round of revisions may come when the survey is translated into a different language, as the meaning of the questions and statements may be altered by that process.

Link to modified survey: Sunrise of Life Survey - Attempt 2

Logic Model for Sunrise of Life (ECUR 809 - Assignment 4)

Below you will find the embedded document for Assignment 3:

It took a long while to reach my conclusions about which model to apply to this program evaluation. After reviewing as much as I could about the various models presented in our first class, I settled into an internal debate between applying Provus' Discrepancy Model and Scriven's Goal Free method. After repeatedly reviewing the program details, I have settled on a concurrent application of Scriven's Goal Based Evaluation (GBE) and Goal Free Evaluation (GFE) models in order to provide the program with as much clear and useful information as possible, especially since the provided summary suggests that some of the goals were altered due to difficulty finding enough participants from the original target population.
Goal Based Evaluation

First, I believe a thorough GBE would benefit the program, as the program developers clearly identified its rationale and primary objectives before it was implemented. These objectives would give the evaluator a solid basis for determining whether the program is able to support the needs of the originally identified target end-users, and to what level of effectiveness. A second reason for conducting a GBE would be to explore the effectiveness of the various elements used to support the direct exercise program, such as the social components and the provision of childcare during sessions. While these elements were not explicitly defined as primary objectives, the developers clearly felt they were necessary to build a sense of community and to ensure all potential users could access the program. A GBE would assess the impact of such 'added features' against the intended outcomes of the exercise program and determine whether they should be continued for the program to be successful.

Goal Free Evaluation

As previously mentioned, the summary clearly outlines that the program developers had to reach out to a broader population of potential users due to a shortage of participants from the specified target population. This would inevitably have affected the program's success as measured against the original objectives. To an outside evaluator conducting a GFE, the differences between the original target users and those added to increase participation would not be obvious without access to medical records. This would allow the evaluator to focus on the effect the program is actually having on all participating individuals, providing information about program effectiveness from a new perspective.
The evaluator could then provide conclusions and recommendations that reflect the current state of the program, rather than what was planned before implementation, without being influenced by the initial objectives. By using both methods concurrently, the program developers would receive rich data, conclusions and recommendations about both their desired outcomes and those that arose when they were forced to alter their selection of participants. This multi-faceted information could then be applied to improve the program and to determine whether the unintended outcomes and program aspects should be continued.

An Introduction to Program Evaluation

For my first foray into program evaluation, I've chosen to review an evaluation conducted by the Social Program Evaluation Group (SPEG), based out of Queen's University and Brock University. Titled Final Report: Evaluation of the Implementation of the Ontario Full-Day Early Learning Kindergarten Program (Vanderlee, Youmans, Peters, & Eastabrook, 2012), the report outlines the findings and recommendations of an in-depth review of the first stage of the Full-Day Early Learning Kindergarten (FDELK) Program in Ontario. This implementation involved the addition of FDELK in close to 600 schools across Ontario as a precursor to province-wide adoption of the program. The full report can be found here: http://www.edu.gov.on.ca/kindergarten/FDELK_ReportFall2012.pdf

Model

Before beginning the evaluation process, the team developed a logic model to guide their evaluation of the FDELK Program implementation. The model, with its clear visual representation of objectives, actions and outcomes, sets out clear goals to define the primary processes used throughout the two-year evaluation period.
By generating evaluation outcomes for the immediate, intermediate and long-term impacts on the FDELK program, the evaluators focus their interpretation of the data and clearly identify their goal of supporting all stakeholders as the program evolves. The model also supports the collection of qualitative data from a 16-school case study and the interpretation of quantitative data provided by the Ontario government, all of which was used in drawing final conclusions and providing recommendations.

Strengths

Early Involvement

I found it interesting that the evaluation team was brought in only one month after the first stage of the FDELK program was implemented. The report gives the impression that the team was able to develop their methods in tandem with the program, offering opportunities for insights that could have been missed had the evaluation been conducted further into the implementation process. It also gives a positive impression of the Ontario government's commitment to the program.

Details, Details, Details

Right from the first section, the reader can appreciate the level of detail maintained throughout the final report. The authors clearly communicate specific and relevant information in each major section, including a dense section of quantitative data, and do not shy away from discussing the challenges faced during the evaluation process. It is clear the team did not rely on a single method or source of information. This, in my opinion, gives the reader a sense of the evaluators' credibility and accountability.

Multiple Modes of Representation

This final evaluation report presents the authors' procedures, findings and recommendations in a variety of ways. Using text, charts, data tables and other representational tools, the document is accessible to a variety of readers. As someone new to these reports, I felt that it did not take me long to become familiar with the document, as there were many tools to support my understanding.
Case Study Narratives

While I was unsure what I would find as I delved into this lengthy document (225 pages), I was not anticipating the three narratives, which represent different levels of fidelity* and were created by synthesizing the case study findings. The narratives clearly summarize the successes and challenges of the various FDELK programs within the 16 schools included in the case study. This style of writing makes the findings accessible to a variety of readers and provides other educators with a familiar starting point for comparing their own implementation to the various success levels that were documented. The narratives also serve to frame the recommendations found later in the report.

* A term I had to look up to properly understand - I found the brief definition from Wikipedia to be the most approachable: http://en.wikipedia.org/wiki/Fidelity#Program_evaluation

Successes, Challenges, Recommendations

Found within the appendices, although referred to throughout the report, are detailed charts the authors created to outline the successes, challenges and specific recommendations in different Program Areas, as identified by various stakeholders. This section is rich with information and provides recommendations relevant to the specific areas that impact the broader program.

Weaknesses

A Blend of Formal and Informal

The final report is a blend of highly accessible 'informal' writing (although at no point is the style truly casual) and highly formal reporting of data through specific terminology. The sections containing quantitative data caused me to skim ahead to a point I was more comfortable reading. While I understand that statistical analysis and reporting require certain details to be included, these sections affected the overall readability of the document.
A Lack of Diversity

One of the first things I noticed in my initial review of this document was the similarity among all of the schools in the case study. All were from relatively major cities, and the student groups were similarly lacking in diversity. I could not help but wonder what impact including schools from smaller communities, or those with more diverse ethnic populations, would have had on the final findings and recommendations of the evaluation. With over 600 schools available for study, why not strive for more variety within the case study?

Final Thoughts

In all, the completed evaluation of the FDELK program implementation is rich with data, clear conclusions and specific recommendations. It reflects the evaluation team's obvious commitment to providing a clear and detailed report without obvious bias. While dense at times, the authors do a good job of communicating the results of their evaluation in a variety of ways, without shying away from the challenges faced by all stakeholders.

An Aside

As an aside, I think it would be interesting to learn whether a logic model was created during the initial development of the FDELK program since, according to McCawley (n.d.), the model can be easily applied for both program planning and evaluation purposes. In looking for more information on the Logic Model, I found the outline produced by McCawley to be at the 'just right' level for my current understanding. You can find McCawley's document here: http://www.uiweb.uidaho.edu/extension/LogicModel.pdf

References

Fidelity. (n.d.). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/Fidelity#Program_evaluation

McCawley, P. F. (n.d.). The logic model for program planning and evaluation. Retrieved from http://www.uiweb.uidaho.edu/extension/LogicModel.pdf

Vanderlee, M., Youmans, S., Peters, R., & Eastabrook, J. (2012). Final report: Evaluation of the implementation of the Ontario Full-Day Early Learning Kindergarten Program.
Retrieved from http://www.edu.gov.on.ca/kindergarten/FDELK_ReportFall2012.pdf

Welcome to the blog I have created for ECUR 809. Hopefully, by reading through my work, you will learn something new or think about a topic from a different perspective.
Thanks for giving me your time and attention! Brian