Proactively Addressing Accountability in Extension
Volume 6, No. 2, Spring 2001
John G. Richardson
Accountability requirements of government programs are ever increasing. As a part of government, Cooperative Extension is faced with identifying program impacts and proactively communicating them to appropriate audiences. Program value versus cost is a key component of accountability. Strategically planning for accountability, so that information on program value is directed to targeted audiences, helps to guide the process. Such planning should result in action steps to communicate program value and successes. A well-designed data collection system can provide the vehicle for collecting and accumulating program outcome and impact data that are readily accessible for marketing accountability information to appropriately targeted audiences. Relevance of the information marketed, delivered to the right people at the right time, is most important for Extension accountability efforts.
Questioning the value of public and private organizations alike on the basis of their relevance to constituents and society is now the public norm. Simply stated, government programs must deliver sufficient public benefits to be worthy of continuing public financial support. To assure that program outcomes demonstrating impact are adequately communicated to appropriate groups and individuals, a proactive accountability plan is needed.
Public benefit (or “people impacts”) is a key factor in program accomplishments. These impacts may take the form of financial gains, taxpayer savings, efficiencies gained, environmental enhancement or protection, individual life enhancements, resources preserved, or societal improvements (Bennett 1996). Increasingly, program accountability must focus on assuring that targeted audiences are informed of “people impacts,” along with whatever other program successes a specific audience desires (Gale 1994; Sherman 1995).
This growing emphasis on accountability in the United States led to the passage of the Government Performance and Results Act (GPRA) in 1993. The intent of the law was primarily to
Improve Federal program effectiveness and public accountability by promoting a new focus on results, service quality, and customer satisfaction by systematically holding Federal agencies accountable for achieving program results (Government Performance and Results Act of 1993).
A continuing focus on accountability was shown by the passage of additional federal laws during 1998 to assure the involvement of stakeholders in making and reviewing programming decisions in agricultural research and extension (AREERA 1998). The guidelines for program planning and reporting under this latest federal act state that “Institutions should describe the contributions of extension staff and programs toward impacts rather than describe the programs” (USDA 1999:10). In keeping with this trend, Extension must be more transparent and more proactively accountable.
The General Accounting Office of the U.S. government (GAO 1998) describes cost-effectiveness analysis as a means to compare a program’s outputs or outcomes with the costs (resources expended) to produce them. Often, priorities for allocating limited resources are set by comparing program costs and impacts. Such comparisons and decisions can be made through ongoing program evaluation or through more formal evaluations that compare the value gained to the costs required to implement the program (Richardson and Phillips 1996).
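The comparison the GAO describes can be sketched as a simple cost-per-outcome calculation. The program names and figures below are invented for illustration; they do not come from any actual Extension evaluation:

```python
# Hypothetical sketch of a cost-effectiveness comparison: resources
# expended per unit of outcome produced. All names and figures are invented.

def cost_effectiveness(cost, outcome_units):
    """Dollars of program cost per unit of outcome (lower is better)."""
    return cost / outcome_units

# Illustrative programs: (total cost in dollars, clients adopting a practice)
programs = {
    "financial literacy workshops": (40_000, 800),
    "one-on-one counseling": (60_000, 500),
}

for name, (cost, outcomes) in programs.items():
    ratio = cost_effectiveness(cost, outcomes)
    print(f"{name}: ${ratio:.2f} per client reached")
```

A comparison like this supports, but does not replace, judgment: a costlier program may still be preferred if its outcomes are deeper or reach an underserved audience.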
O’Neill et al. (1999) explain that Extension leaders in Ireland are now required to show “value for money,” with policy makers demanding evidence that expenditures on extension’s services are cost-effective when compared to other uses of public funds. Both effectiveness and efficiency are taken into consideration.
Determining program economic benefit values
In valuing the impacts or results of Extension programs, one or more of the factors listed below may be used to calculate the economic value of those impacts. Value may thus equal
- willingness to pay
- economic opportunity cost of capital
- alternative uses
- expected values (projected use/income)
- multiplier effect
- reduced costs
- increased income
- debt reduction
- past trends
- how we are better off (less injury/sickness/death/medical costs/insurance premiums)
- increased productivity
- life — statistical value (potential life saved)
- non-market benefits
- indirect costs/values
(Bennett 1996; Swiss 1986; Rhoads 1985; Layard and Glaister 1994; Haveman and Margolis 1983).
Using only two of these values, the MONEY 2000™ program in New Jersey was deemed to have produced more than $3 million in benefits to clients through debt reduction and increased savings (O’Neill and Richardson 1999). MONEY 2000™ is an Extension program that encourages clientele to improve their financial well-being by increasing savings and/or reducing household debt (O’Neill et al. 2000).
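As a sketch of this kind of valuation, component values can be summed and set against program cost. The figures below are invented for illustration; they are not the actual MONEY 2000™ data:

```python
# Hypothetical benefit aggregation in the style of the MONEY 2000(TM)
# example: sum component values, then compare the total to program cost.
# All dollar amounts are invented for illustration.

benefit_components = {
    "debt reduction": 1_800_000,
    "increased savings": 1_400_000,
}

program_cost = 250_000  # invented delivery cost

total_benefit = sum(benefit_components.values())
benefit_cost_ratio = total_benefit / program_cost

print(f"Total benefit: ${total_benefit:,}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.1f} to 1")
```

Note that this sketch uses only two of the value factors listed above; a fuller valuation might also count non-market benefits or multiplier effects where defensible.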
Often, when values are ascribed to program results, intangible factors that influence those values are omitted in the process. While such intangibles may be acknowledged, the literature supports the persuasive strength of stated quantitative values (Campen 1986). Thus, even though stated values may contain omissions, a committee report of the U.S. House of Representatives notes that “Whenever some quantification is done — no matter how speculative or limited — the number tends to get into the public domain and the qualifications tend to get forgotten . . . The number is the thing” (United States Congress 1980). When determining program outcome values, then, it is important to understand that other influencing factors may be omitted. Nevertheless, placing quantitative values on program impacts is generally seen as a viable means of presenting program worth.
Strategic planning for accountability
For any program marketing initiative to be effective, a clear and methodical plan of action must exist. Sporadic initiatives may satisfy a specific requirement for notifying a designated audience, but they cannot be expected to have far-reaching effects. To assure that a continuous and sustained program marketing effort is planned and effectively implemented, each local unit should develop a strategic plan for its accountability efforts. The time involved in developing an accountability/marketing strategic plan can be expected to produce excellent results (Liles 1998).
Target audiences for accountability information
Unfortunately, it is well documented that “some Extension personnel like to hide”; even when aware of accountability needs and requirements, Extension workers oftentimes feel that their efforts speak for themselves (Boyle 1999). Such attitudes are being challenged around the world, and many governments are weighing alternative approaches to Extension as a direct result of Extension’s failure to communicate program results and impacts. In today’s “results-oriented” world, Extension, like any other service, must advertise its achievements and establish its worth (Paxton and Culverwell 1988).
There can be major differences in accountability requirements or needs depending on the audience. Some may want only limited or highly specific information in order to satisfy their requirements, while others may desire more extensive information. With the differing needs in mind, the Extension unit should address several key factors in order to develop and maintain a quality accountability system. These key factors can be listed as WHO, WHAT, WHEN, and HOW (Taylor-Powell 1989).
First, the WHO should clearly identify those primary and secondary audiences for receiving accountability information. Then, define WHAT information will be needed for each audience, WHEN the information will be given, and HOW the information will be formatted and presented. The key is to provide the right accountability information to the right people at the right time in the right format (NCCESTMTF 1998).
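One way to make the WHO/WHAT/WHEN/HOW factors concrete is a simple per-audience lookup table. The audiences and entries below are hypothetical illustrations, not NCCES policy:

```python
# Hypothetical accountability plan: for each audience (WHO), record WHAT
# information it needs, WHEN it is delivered, and HOW it is formatted.
# All entries are invented for illustration.

accountability_plan = {
    "county commissioners": {
        "what": "local program impacts and taxpayer savings",
        "when": "quarterly",
        "how": "one-page fact sheet",
    },
    "congressional aides": {
        "what": "concise success stories with a contact person",
        "when": "annually",
        "how": "one-page brief",
    },
}

for audience, plan in accountability_plan.items():
    print(f"{audience}: {plan['what']} ({plan['when']}, {plan['how']})")
```

Keeping the plan in this explicit form makes gaps visible: any audience without a WHAT, WHEN, and HOW entry is an audience the unit is not yet serving.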
Taking action steps for program accountability and marketing
Numerous presentations and testimonials by lawmakers and other experts convey a familiar tone when discussing Extension accountability and the marketing of program results. Some of these familiar thoughts follow (Boyle 1999; Richardson 1999):
- Target program results to interested audiences.
- Give busy officials sound bites (very brief bits of focused information).
- Provide information on their terms, not ours.
- Provide evidence to support our claims.
- Provide the right information.
- Determine what they do know.
- Determine what they want to know.
- Decide what they need to know.
In dealing with policy makers (politicians) and those who provide analytical information to these policy makers, some suggested approaches follow (Potter 1999):
- Recognize that they are usually part-time.
- Understand that they are a cross section of the population (don’t stereotype).
- Understand that they are usually generalists.
- Understand that they look for duplication of services.
- Be in touch frequently.
- Develop friendships.
- Show them your programs.
- Explain programs and show constituency served.
- Follow-up frequently (keep short and concise).
For the politicians’ support staffs, Potter (1999) suggests that it is most helpful to get to know exactly what the support staff wants. He suggests that they are usually quite specific, that most are quite thorough, and that they really know what they can do in government. He suggests that they generally like to know numbers, and like to hear specific, in-depth information.
Using success stories
Based on evidence gained from the audiences identified by the North Carolina Cooperative Extension Service (NCCES) for receiving accountability information, many have stated preferences for concise success stories. They generally prefer stories that indicate practice adoption, changed behaviors by clients, or positive impacts on clients. Such stories have generally been found to be highly popular for communicating Extension program impacts (Richardson 1999).
The following are comments reflecting the attitudes of some of the NCCES audiences for accountability information:
“We want brief, concise reports of accomplishments that cover the main points of who’s involved; problem; what you did; difference it made; any collaborators; contact person; and ‘on one page.’” (Congressional aide for U.S. Congresswoman Eva Clayton)

“I want information that is really concise, tells me what is happening without wasting words, and that I can read very quickly.” (Member of the N.C. House of Representatives)
Some counties are effectively using their success stories in preparing program marketing fact sheets for use in reports to the people, informing county officials, and for distributing information to the media and general public. The fact sheets are usually no more than one or two pages and have an attractive, professional appearance.
Meeting multiple accountability needs
Success stories alone will not meet all accountability needs. For example, NCCES must also provide to the federal government participation data such as numbers of face-to-face teaching contacts. Civil rights information must also be provided in reports to the federal government. NCCES must also provide to the University of North Carolina System information on the numbers and types of non-degree credit activities conducted by Extension throughout the state. Other reports required at the local level may include the number of activities or events held. A single approach to program marketing and provision of accountability information to key audiences will likely fail. The key is to define the needs of the respective audiences and gain insights into what they expect or want to know about Extension and its programs (NCCESTMTF 1998). Then, with the audiences defined, a strategic, sustainable plan for providing the expected information should be developed and implemented.
Using a data collection system
In 1995, North Carolina developed an Internet-based reporting system that allows for immediate interactivity throughout the entire NCCES network of 101 local units and the NC State University campus. This system collects all program contacts, accomplishments, success stories, civil rights information, and all non-degree credit activities conducted by NCCES. While the Extension Reporting System (ERS) provides the mechanism for accomplishment reporting and data collection, the information contained in ERS is only as valid or comprehensive as the information Extension staff provide. Rather than viewing accomplishment reporting as an unpleasant task, staff should see placing quality information into ERS as an opportunity to store and use this valuable information for local accountability needs, as well as a showcase for Extension’s programs at the state and national levels. Many agents, while expressing their dislike for reporting program results, often express considerable satisfaction in being able to see and quantify their efforts. Such a dichotomy of thought will likely remain, so Extension must continue serious efforts to communicate the value of the reported information for use in organizational accountability and program marketing.
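The role such a system plays can be sketched as a minimal accomplishment-report store that serves several audiences from one set of records. The field names and figures below are illustrative assumptions, not the actual ERS schema:

```python
# Minimal sketch of an accomplishment-report store. Field names, counties,
# and figures are illustrative assumptions, not the actual ERS schema.
from dataclasses import dataclass

@dataclass
class Report:
    county: str
    program_area: str
    contacts: int            # face-to-face teaching contacts
    success_story: str = ""  # concise narrative of impact, if any

reports: list[Report] = [
    Report("Wake", "family finance", 120, "Clients reduced household debt."),
    Report("Durham", "family finance", 85),
]

# One data set, two accountability uses: aggregate contacts for a federal
# participation report, and pull success stories for program marketing.
total_contacts = sum(r.contacts for r in reports)
stories = [r.success_story for r in reports if r.success_story]
print(total_contacts, len(stories))
```

The point of the sketch is that a single well-structured record can feed both the quantitative reports the federal government requires and the success stories that policy makers prefer.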
Such serious efforts to secure an abundant supply of program impact information help meet the challenge of gaining and maintaining credibility with the public as well as with Extension’s many users. Across the world, “hard” questions are being asked about the relevance of Extension and the cost effectiveness of its programs compared to alternative programs or opportunities. Recognition of this growing, ever-present requirement for accountability and efficiency is a must. Communication of impacts and positive results to key audiences is also a must. Developing and implementing a strategic plan for achieving these “musts” is critical. Questions about the viability of Extension in the 21st century can be expected to continue (Campbell 1999).
In the course of addressing the accountability requirements for Cooperative Extension, this paper has described several supporting components of a multifaceted, well-planned, and documented approach to gaining public support. Because elected policy makers depend on the public for their own support, the organization’s results must be made congruent with its mission and with public expectations. Upon achieving this congruency, efforts must be made to identify and market program outcomes so as to continue to gain the support of policy makers and their constituents, the public whom they serve.
A proactively planned and implemented accountability system can assure that Extension’s program results are adequately communicated. Such a system can and does engender the support of those whom Extension serves. Proactive accountability and marketing efforts have been shown to resonate positively with policy makers, resulting in sustained or growing budgetary support (Richardson et al. 2000).
While proactive and well-focused program accountability and marketing efforts have been shown to be productive for program support, Patton (1999) argues that the information and programs being discussed must have relevance to the intended audiences. From a political perspective, he states that “in reality, we can connect accountability leadership and evaluation to politics. However, this is usually not relevant when NO one cares, NO money is involved, and NO one has an interest in the program.”
Whether as an Extension organization or related Family and Consumer Sciences organization, it is the organization’s responsibility to make sure that people do care about its programs, that significant impacts are produced that show value, and that such results are effectively communicated to the right people in the right format at the right time. Expecting good works to speak for themselves is an idea that has long been invalid. Only through a planned, proactive approach to organizational accountability/marketing efforts can we realistically expect to continue to gain needed support for our programs and operations.
AREERA. 1998. Agricultural Research, Extension, and Education Reform Act of 1998. Washington, D.C.: United States Government.
Bennett, Claude F. 1996. Rationale for public funding of agricultural extension programs. Journal of Agricultural and Food Information. 3(4).
Boyle, P. 1999. “Evaluation use . . . what do stakeholders want to know about our programs? Three stakeholder perspectives.” Panel conducted at the Providing Leadership for Program Evaluation Conference, University of Wisconsin Extension, Madison, Wisconsin.
Campbell, D.A.C. 1999. Managing public sector extension: some critical issues. 1999 Conference Proceedings, Association for International Agricultural and Extension Education, Port of Spain, Trinidad.
Campen, J. T. 1986. Benefit, cost, and beyond. Cambridge, Massachusetts: Ballinger Publishing Company
Few, P., and J. Vogt. 1997. Measuring the performance of local governments. Popular Government. Winter issue: 41-53. Chapel Hill, N.C.: North Carolina Institute of Government.
GAO. 1998. Performance measurement and evaluation — definitions and relationships. GAO/GGD-98-26. Washington, D.C.: GAO.
Gale, S. 1994. Performance measurement: public pressures and legislative mandates. USAID Evaluation News 6(1): 2-8. Washington, D.C.: U.S. Agency for International Development.
Haveman, R. H., and J. Margolis. (eds.). 1983. Public expenditure and policy analysis, 3rd ed. Boston, Mass.: Houghton Mifflin.
Layard, R., and S. Glaister. (Eds.). 1994. Cost-benefit analysis. 2nd ed. Cambridge, England: Cambridge University Press.
Liles, R. 1998. Strategic planning for accountability: A systems approach. Proceedings of 32nd Conference South African Society for Agricultural Extension. East London, South Africa.
North Carolina Cooperative Extension System Targeted Marketing Task Force (NCCESTMTF). 1998. Targeted marketing for accountability information. Task Force Report. Raleigh and Greensboro, N.C.: NC State University and NC A&T State University.
O’Neill, B., and J. G. Richardson. 1999. Cost-benefit impact statements: a tool for extension accountability. Journal of Extension (On-line), 37(3):1-2. Available at http://www.joe.org/joe/1999august/tt3.html
O’Neill, S., A. Matthews, and A. Leavy. 1999. Measuring extension performance: a case study of the Irish agricultural extension service. 1999 Conference Proceedings, Association for International Agricultural and Extension Education, Port of Spain, Trinidad.
O’Neill, B., J. Xiao, B. Bristow, P. Brennan, and C. Kerbel. 2000. MONEY 2000™: Feedback from and impact on participants. Journal of Extension (On-line), 38(6). Available at http://www.joe.org/joe/2000december/ent.html#rb3
Patton, M. Q. 1999. “Perspectives of providing leadership for program evaluation.” Interactive video conducted at a Providing Leadership for Program Evaluation Conference, University of Wisconsin Extension, Madison, Wisconsin.
Potter, C. 1999. “Evaluation use . . . what do stakeholders want to know about our programs? Three stakeholder perspectives.” Panel conducted at a Providing Leadership for Program Evaluation Conference, University of Wisconsin Extension, Madison, Wisconsin.
Rhoads, S. E. 1985. The economist’s view of the world. Cambridge, England: Cambridge University Press.
Richardson, J. 1999. Developing and communicating effective program success stories for enhanced accountability. Journal of Applied Communications, 83(4):7-22.
Richardson, J. G. and R. E. Phillips. 1996. Developing cost and benefit estimates. Extension Education Process and Practice, SD-8, Raleigh, N.C.: North Carolina Cooperative Extension Service, N.C. State University, Raleigh, NC.
Richardson, J., J. Staton, K. Bateman, C. Hutcheson, G. Riddick, and R. Mustian. 2000. Political change and extension accountability, case studies via technology. Roundtable Discussion Paper, American Evaluation Association, Honolulu, Hawaii, November, 1-3, 2000.
Sherman, S. 1995. Assessment of the educational and informational needs of county governments in North Carolina (on-line). Available at http://www.ces.ncsu.edu/depts/aee/abstract.html
Swiss, J. 1986. Readings for PA 516. Course packet, summer semester, NC State University, Raleigh, NC.
Taylor-Powell, E. 1989. Analyzing and reporting results. Extension Publication D-1373. College Station, Texas: Texas Agricultural Extension Service.
United States Congress. 1980. Cost-benefit analysis: wonder tool or mirage? Report by the Subcommittee on Oversight and Investigations, 96th Congress, 2nd session, Committee Print 96-IFC 62. Washington, D.C.: U.S. Government Printing Office.
United States Government. 1993. Government Performance and Results Act. Washington, D.C. (on-line). Available at http://www.whitehouse.gov/omb/mgmt-gpra/gplaw2m.html#h1
U.S. Department of Agriculture. 1999. Agricultural Research, Extension, and Education Reform Act of 1998. Washington, D.C. (on-line). Available at http://www.reeusda.gov/part/areera/
John G. Richardson, EdD., Extension Program Delivery and Accountability Leader, North Carolina Cooperative Extension, NC State University, firstname.lastname@example.org.
Cite this article:
Richardson, John. 2001. Proactively addressing accountability in Extension. The Forum for Family and Consumer Issues 6(2).