Using research in marriage and relationship education programming

Brian J. Higginbotham, Ph.D.
Assistant Professor and Family Life Extension Specialist
Department of Family, Consumer, and Human Development
Utah State University
2705 Old Main Hill, Logan, UT 84322.
Phone: 435-797-7276
Fax: 435-797-7220
BrianH@ext.usu.edu

Katie Henderson
Research Assistant
Department of Family, Consumer, and Human Development
Utah State University

Francesca Adler-Baeder, Ph.D., CFLE
Associate Professor and Extension Specialist
Department of Human Development and Family Studies
Auburn University
286 Spidle Hall
Auburn, AL  36849
334-844-3234
334-844-4515 fax
fadlerbaeder@auburn.edu

Abstract

Research and programming are interrelated. Good research leads to good programming and good programs can lead to good research. This article describes methods to incorporate research into marriage and relationship programming and ways to generate new research. Specifically, research informed programming and programmatic research are discussed. A process to develop and modify programs using existing research is highlighted as well as techniques to research existing marriage and relationship education programs.

Keywords: marriage, relationship, research, evaluation

Introduction

Family life educators have been encouraged to use existing research as the basis for what they offer programmatically (Hughes 1994). Educators have also been admonished to approach “prevention as a scientific enterprise as well as a service mission” (Dumka et al. 1995, 78). In light of these endorsements for research to be both the foundation and a goal of programmatic efforts, this article discusses the dual role of research in marriage and relationship education programming. As depicted in Figure 1, research and programming are interrelated. Research can be used both to inform programmatic decisions (research informed programming) and to explain the outcomes of programmatic efforts (programmatic research).

Figure 1. A model of the interrelated nature of research and programming


Programmatic research includes information gleaned from evaluative studies of existing programs. It details “if” a particular program works. It can also describe “why” and “for whom” the program is effective. This type of information should inform decisions about “which” program to offer, “how” and “where” the program should be offered, and “who” should be the target audience. Non-programmatic research includes empirical studies on factors related to relationship and marital quality and should inform “what” topics are taught in relationship education programs. Theories related to relationship development and adult learning also can inform program content and program implementation.

The research literature on marriage and relationship education programming continues to grow and evolve as more and more programs are implemented. For example, with the federal government’s recent funding of Healthy Marriage Demonstration grants, 126 programs using different curricula, delivered in different contexts, and targeting different populations are currently being implemented and researched around the country (see http://www.acf.hhs.gov/healthymarriage). Each of these federally funded programs will report on what did and did not work. The lessons learned from these programs, as well as research from the many non-federally funded programs also underway, will in turn inform new and existing programs. Research informed programming and programmatic research are both critical components in the recursive process of developing, implementing, and refining successful relationship and marriage education programs.

Research informed programming

All facets of programming can and should be informed by research, including the decision of which curriculum to offer. There is a plethora of marriage and relationship education curricula in circulation, and a directory that includes most of these programs is available at http://www.NERMEN.org. After seeing the choices, one may ask, “How do I select a curriculum from all those available?” One legitimate, respected approach is to choose a curriculum based on empirical evaluations of program effectiveness. Trustworthy evidence of program effectiveness can be found in peer-reviewed academic journals. For recent reviews of curricula with demonstrated short-term and/or sustained positive program effects, see Carroll and Doherty (2003) and Jakubowski et al. (2004). Unfortunately, well-known and well-researched curricula may not be within one’s budget.

It would be unfair to discredit or discount curricula that have not been researched. Many programs have not been empirically evaluated; yet, it is plausible that they are quite effective. The absence of documented programmatic effects may be due to the lack of funding to support evaluation research or the lack of evaluation expertise by those offering the program. However, when research is available, educators should be mindful of its relevance and significance. Programs that have not been updated for some period of time may be missing important information. The absence of program updates may indicate that program developers are not evaluating their program – or worse, not refining the program by incorporating alterations indicated by programmatic evaluations.

Another approach to program selection

In light of the considerations noted above, an alternative to comparing empirical programmatic evaluations is needed to guide decisions about program selection. One such strategy involves comparing program content with findings from an appropriate empirical research base (Adler-Baeder, Higginbotham, and Lamke 2004). This approach is consistent with best practices in family life education and exemplifies what is meant by research informed programming (e.g., Hennon and Arcus 1993). Robert Hughes explained that “a well-grounded family life education program needs a demonstrated research basis in regards to the topic, the content, and the application techniques” (1994, 75). In other words, when choosing an established program, it is important to verify that the program content is still clearly supported by current literature; when developing a new program, it is important to translate the extant research into program content. The extant literature refers to all the existing research related to program goals. The process of identifying, reviewing, and translating all the relevant literature into its appropriate programmatic application may seem daunting, but systematically following a few steps can assist in this process (for a detailed description of this process see Adler-Baeder, Higginbotham, and Lamke 2004).

Step 1: Gather relevant literature. The first step is to determine and gather the relevant literature related to program goal(s). The overall goal of the education program should dictate the research topic area to be investigated. Since the goals of marriage and relationship education center on the improvement or enhancement of marital quality (e.g., Parke and Ooms 2002), a review of literature should center on factors related to marital quality. A number of electronic databases, such as EBSCO and PsycINFO, will generate a compilation of the literature associated with specified key words such as “marital,” “satisfaction,” “relationship,” and “quality.”

Step 2: Narrow the potential studies for review. Step 2 involves narrowing the identified articles to a smaller subset. The narrowing process should be guided by a clear and defensible rubric. In the case of general marriage education programming, articles should be (a) empirical, (b) peer-reviewed, (c) published during the past 10-15 years, and (d) based on adult samples. The rationale is that juried articles have undergone scrutiny of methods and interpretation and are likely to represent the most rigorous basis for guiding applied efforts. Studies published more recently are most likely to include data that relate to the current generation of couples. In addition, marriage education targets adults; therefore, findings from studies of adult samples provide the necessary research base.

Additional narrowing should involve focusing on articles that assess interactional variables. In marriage and relationship education programs, family and couple interactional processes, not family structure, should be the center of programmatic attention. Interactional variables, such as spending time with one’s partner, are factors that are considered changeable or modifiable (Karney and Bradbury 1995) and are considered to be the most appropriate targets for educational prevention and intervention work (Halford 2004). For example, negative processes, such as criticizing one’s spouse, can be addressed through educational programming with the intention of reversing or avoiding them.
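
For educators who export database search results into a spreadsheet or reference manager, the narrowing criteria described above can be applied systematically rather than article by article. The short Python sketch below is only an illustration of that idea; the article records and field names are hypothetical, and the same screening can, of course, be done by hand.

```python
# Illustrative screening of exported search results against the Step 2
# narrowing criteria. Article records and field names are hypothetical.
articles = [
    {"title": "Time together and marital satisfaction", "year": 2003,
     "peer_reviewed": True, "empirical": True, "adult_sample": True,
     "interactional": True},
    {"title": "Adolescent dating attitudes", "year": 2001,
     "peer_reviewed": True, "empirical": True, "adult_sample": False,
     "interactional": False},
    {"title": "Commentary on marriage policy", "year": 2005,
     "peer_reviewed": False, "empirical": False, "adult_sample": True,
     "interactional": False},
]

CURRENT_YEAR = 2007  # year the review is being conducted

def meets_criteria(article):
    """Keep recent, empirical, peer-reviewed studies of adult couples'
    interactional processes."""
    return (article["empirical"] and article["peer_reviewed"]
            and article["adult_sample"] and article["interactional"]
            and CURRENT_YEAR - article["year"] <= 15)

shortlist = [a["title"] for a in articles if meets_criteria(a)]
print(shortlist)  # ['Time together and marital satisfaction']
```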

Step 3: Review research rigor. It is also important to check the rigor of the articles that may be used to inform programmatic decisions. There are no clear guidelines on what constitutes “rigorous research”; however, four criteria may assist in this process. Educators can have the most confidence in studies that include (a) longitudinal designs, (b) representative samples, (c) observational methods, and/or (d) multi-method or multi-informant procedures. These types of studies are generally of higher quality than studies that are not characterized by these methodological features.

As compared to cross-sectional studies, longitudinal research provides more reliable information on directional effects and causal determinants of marital quality and/or satisfaction (Karney and Bradbury 1995). Thus, longitudinal findings provide the best support for anticipated program impact. A representative sample offers more opportunities to generalize findings to a broader array of program participants. Observational methods of data collection are generally considered to have greater validity than reports from a single informant. If self-report or survey data collection methods are used, rigor can be established through the use of multiple methods and multiple informants (Babbie 2001).

Step 4: Identify research themes. Step 4 of linking research to practice involves identifying themes in the relevant and rigorous literature. After reviewing the appropriate articles on couple interactional processes, Adler-Baeder et al. (2004) identified three broad categories of empirical findings: positive emotions and behaviors (Positivity), negative emotions and behaviors (Negativity), and cognitions. Table 1 summarizes the research-supported topics within each category. This list can be used to examine curricula that educators are currently using, or may consider adopting, to determine how inclusive the curriculum is of these topics.

Table 1. Research-supported themes and subcategories of marriage education content

Positivity: Protective Factors
  • Positive emotions
  • Affectionate behaviors
  • Supportive behaviors
  • Time together
  • Relational identity
  • Expressivity and self-disclosure

Negativity: Risk Factors
  • Negative emotions
  • Overt negative behaviors
  • Withdrawing, nonresponsive, or dismissive behaviors
  • Demand-withdraw pattern

Cognitions: Protective Factors
  • Realistic beliefs and perception of expectations met
  • Knowledge and understanding
  • Consensus
  • Perceived equity/fairness
  • Positive attributions and biases

Source: Adler-Baeder, Higginbotham, and Lamke 2004

The four steps detailed above can be applied to other aspects of marriage and relationship education programming. Curriculum choice is only one of the decisions that must be made, and it should not be the only research-based decision; the extant literature should also inform a host of implementation decisions. By identifying and categorizing appropriate research and then reviewing whether a program is consistent with research-supported themes, one can have greater confidence that the program will have the desired effect. When the content and implementation design of educational programs are consistent with the relevant bodies of literature, educators should, theoretically, provide participants with an effective learning experience (Hennon and Arcus 1993).
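
One simple way to make such a review concrete is to list a curriculum’s lesson topics and check them against the themes in Table 1. The sketch below is purely illustrative: the curriculum topics shown are hypothetical, and in practice this comparison is usually done qualitatively by reading the lesson materials themselves.

```python
# Illustrative comparison of a hypothetical curriculum's lesson topics
# against the research-supported themes summarized in Table 1.
research_themes = {
    "positive emotions", "affectionate behaviors", "supportive behaviors",
    "time together", "relational identity", "expressivity and self-disclosure",
    "negative emotions", "overt negative behaviors",
    "withdrawing, nonresponsive, or dismissive behaviors",
    "demand-withdraw pattern",
    "realistic beliefs and perception of expectations met",
    "knowledge and understanding", "consensus", "perceived equity/fairness",
    "positive attributions and biases",
}

curriculum_topics = {  # hypothetical topics taken from a curriculum outline
    "time together", "supportive behaviors", "demand-withdraw pattern",
    "consensus", "budgeting",  # 'budgeting' is not a Table 1 theme
}

covered = curriculum_topics & research_themes
missing = research_themes - curriculum_topics
print(f"Research-supported themes covered: {len(covered)} of {len(research_themes)}")
print("Themes not addressed:", sorted(missing))
```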

Programmatic research

By definition, research informed programming relies heavily on programmatic research. Without programmatic research, educators are often left to make decisions based on theoretical assumptions or best guesses. Educators currently offering programs can greatly contribute to the field by doing programmatic research.  Sharing results and lessons learned can guide future programmatic efforts. Although programmatic research does take time and money, there is likely some sort of research that every organization can undertake. Recognizing that each organization is different in terms of scope, budget, and evaluation expertise, Jacobs (1988) has outlined a five-tiered approach to evaluation. Although the levels differ in terms of the type and scale of research activities, all levels share common assumptions about the role and value of program evaluation. These assumptions include the following (Jacobs 1988, 49):

  • “Evaluation should be viewed as the systematic collection and analysis of program-related data that can be used to understand how a program delivers services and/or what the consequences of its services are for participants.” Consequently, evaluation is both descriptive and “judgmental.”
  • “Evaluation is a necessary component to every program, regardless of its size, age, and orientation.” All programs should engage in some sort of evaluation, if for no other reason than to improve their own effectiveness.
  • “There are numerous legitimate purposes for evaluation. Programs must be committed to providing an effective service, but not all evaluations should attempt to determine program impact per se.”
  • “There are also many legitimate audiences for an evaluation.” The intended audience of the evaluation should impact the evaluation design.
  • “Evaluation activities should not detract from service delivery.”

Five-tiered approach to evaluation

Each level of Jacobs’ five-tiered approach to evaluation demands greater efforts, increased precision in program definition, and a larger commitment to the evaluation process. Programs can engage in several levels of evaluation simultaneously. It is also important to note that one level of evaluation is not better than another. All aspects of evaluation have inherent value and can contribute to the refinement of individual programs and to the field as a whole.

Level one: The Pre-implementation tier
The first level of Jacobs’ five-tier framework is the Pre-implementation tier. Activities in this tier include needs assessments, determining the fit between the community and the program, detailing program objectives, and establishing the basis on which the curriculum was developed. The activities in this tier provide the foundation for the credibility of the program and all subsequent evaluation efforts. The process highlighted earlier in this article – evaluating curricula against the standard of the extant literature – is an example of an evaluation activity in the Pre-implementation tier, and it can support the appropriateness of the topics included in a chosen curriculum. In this tier of evaluation, “the minimum expectation would be that program developers show evidence that the program was developed through a process in which the needs of a particular audience were considered” (Hughes 1994, 77).

All organizations should go through this level of evaluation before offering a marriage or relationship education program. Agencies that don’t will often learn this lesson the hard way. The author knows one agency that paid handsomely for a large number of facilitators to be trained in a well-known curriculum. The facilitators were then responsible for offering marriage education programs in their respective counties. To their surprise and dismay, couples did not come swarming to the workshops. This agency learned that just because their funding source believed in the merits of marriage and relationship education did not mean that the targeted audience would see the value of the program or that they would be willing to take the time to attend the workshops. In addition to providing programs that we feel couples need, it is essential to provide programs that couples want. Because every community and target audience is different, it is important that potential participants be asked what it is they want and what format they want it in. Participant attendance is most likely to increase if a needs assessment is performed first and incorporated into the program design. This can be done by holding focus groups with potential participants. (See Lengua et al. 1992.) It is likely that at some point, information about the relevance of, and need for, the program will be requested. Therefore it is advantageous to have this information readily available. Done well, evaluations at this level provide the foundation and baseline for the broader range of future evaluation activities (Jacobs 1988).

Level two: The Accountability tier
The Accountability tier involves the documentation and systematic collection of client-specific and service-utilization data. It is called the Accountability tier because reporting to funders and other interested parties is almost always expected, if not required. At a minimum, programs should be able to report that in a specified period of time X couples were provided Y services at a cost of Z. Examples of ways to do this include keeping track of the number of couples registered for your classes, the number who attend, and their demographic characteristics. To document these details, one may track the number of sessions offered, amount of time per session, and other aspects of the workshop format. For those familiar with the logic model approach to program development and evaluation, this is consistent with what is referred to as outputs.

Although it may be assumed that programs regularly collect this type of data, research indicates that relatively few actually do. In one national study of family programs, more than 20 percent of programs kept no data at all, and among those that did, data collection methods varied widely (Hite 1985). If data are collected sporadically or unsystematically, programs may have difficulty reporting the number of people they serve, whom they have reached, how staff spend their time, and so on.

Tier two evaluations do not require the documentation of outcomes. To quote Jacobs, “second tier evaluation simply documents what exists – client characteristics, service/intervention descriptions and costs – and it may be the correct place to stop to allow newly organized programs to ‘catch their breath’” (Jacobs 1988, 56). It is important to keep this accountability data and to make sure it is frequently updated. This information will be useful in grant applications or requests for increased funding.
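
For organizations that keep registration and attendance records electronically, even a short script or a basic spreadsheet can turn raw records into the counts and costs funders expect. The sketch below is purely illustrative; the field names, attendance figures, and program cost are hypothetical.

```python
# Illustrative accountability-tier summary (hypothetical data).
# Each record represents one registered couple.
from collections import Counter

records = [
    {"attended_sessions": 6, "sessions_offered": 6, "marital_status": "first marriage"},
    {"attended_sessions": 4, "sessions_offered": 6, "marital_status": "remarried"},
    {"attended_sessions": 0, "sessions_offered": 6, "marital_status": "cohabiting"},
]
program_cost = 4500.00  # hypothetical total cost for the reporting period

registered = len(records)
attended = sum(1 for r in records if r["attended_sessions"] > 0)
completed = sum(1 for r in records if r["attended_sessions"] == r["sessions_offered"])

print(f"Couples registered: {registered}")
print(f"Couples attending at least one session: {attended}")
print(f"Couples completing all sessions: {completed}")
print(f"Cost per couple served: ${program_cost / attended:.2f}")

# Demographic breakdown, useful later for Tier three 'program clarification' questions.
print(Counter(r["marital_status"] for r in records))
```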

Level three: The Program Clarification tier 
The third level of evaluation includes the clarification of information gathered, with the opportunity for feedback and improvements to the program. Jacobs explains,

often this is the most useful genre of evaluation, with many data collection and analysis options open to younger, low-budget programs. At this level, program staff relies primarily on their own ‘collective wisdom’ to answer the question of ‘how can we do a better job serving our clients?’ … This information often can be put to immediate use, and evaluation here remains close to the program, reflecting the ever changing beliefs and behaviors of the real people who work there and participate in it. (Jacobs 1988, 57-59)

Data are put to use at this stage. For example, an educator may notice from Tier two data that a program is attracting couples in first marriages, but cohabiting and remarried couples are not attending. This is the time to ask, “Why might this be the case? Have we clearly identified our target audience? Is this the group we want to be attracting? What aren’t we doing that might possibly attract the couples we intended to serve?”

At this point, pondering a quote attributed to Albert Einstein may be helpful: “Insanity is doing the same thing over and over and expecting different results.” If the expected results are not being achieved, altering methodology or at least clarifying program goals may be necessary. Based on further analysis of Tier two data, adjustments should be made to ensure that objectives are realistic and that the implementation design is conducive to the achievement of those objectives. Educators and program administrators should be able to examine the programmatic content, instructional processes, and procedures to determine what is working and what is not. This, of course, requires program staff to work cooperatively. Staff that work on different parts of the program or with different audiences may have different but insightful viewpoints on what is and is not working. Engaging in this “self-evaluation” is critical to improving the implementation and content of individual programs.

Level four: The Progress-Toward-Objective tier
At the fourth level of evaluation, the focus turns to program effectiveness. Activities include documenting progress toward short-term objectives, measuring client and staff satisfaction, and assessing for differential effects (e.g., does the program work better for couples from one particular cultural group?). This type of evaluation is usually undertaken by more established and financially secure programs. To document progress toward objectives, programs must have the time and resources to collect and analyze the necessary information. Professional evaluators, from universities and colleges or the private sector, are often hired to assist in designing and implementing these evaluations. The evaluations may employ several methods, including pre-/post-tests or standardized measures, and may examine variables that could explain differential impacts, such as participants’ age, race, or gender. This level of evaluation increases knowledge about the effectiveness of the program and is usually expected when applying for large grants.
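
As a small illustration of what a progress-toward-objectives analysis can involve, the sketch below compares hypothetical pre- and post-workshop scores on a relationship-satisfaction measure using a paired t-test (here, SciPy’s ttest_rel). An actual evaluation would use a validated instrument and, ideally, guidance from a professional evaluator.

```python
# Illustrative pre-/post-test comparison (hypothetical scores).
# Scores might come from a validated relationship-satisfaction scale;
# these values are invented for demonstration only.
from scipy import stats

pre = [28, 31, 25, 30, 27, 29, 26, 32]   # pre-workshop scores, one per participant
post = [31, 33, 27, 34, 27, 33, 29, 35]  # post-workshop scores, same participants

mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test

print(f"Mean change: {mean_change:.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the average improvement is unlikely to be due to
# chance alone, though without a comparison group (Tier five) it cannot rule
# out other explanations such as maturation or social desirability.
```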

Level five: The Program-Impact tier
The fifth and final tier of evaluation pertains to documenting program impacts. This type of evaluation includes a rigorous experimental design to (a) assess the program’s effectiveness and (b) discern whether positive results are attributable to chance or to some other unmeasured variable. Random assignment and comparison groups are typically employed to identify and measure long- and short-term impacts. These evaluations typically require longitudinal designs, and in the case of long-term impacts an organization may be looking at a multiyear effort. Although program-impact studies can certainly inform individual programs, usually these studies are “externally directed, meant to contribute more broadly to developmental theory and clinical or evaluation practice” (Jacobs 1988, 61). It is these types of studies that provide the most convincing data to policy makers because they demonstrate that outcomes did not occur by chance or because of other uncontrolled factors. Rather, results from these studies provide evidence of the utility and unique contributions of the program.

Evaluation resources

There are a number of on-line resources related to the evaluation of family life education. Some are more general, while others provide specific examples and resources for marriage and relationship education. Examples include:

  • Child Trends’ compendium of measurement instruments.  This on-line resource contains a wide array of measurement instruments commonly used in the field of marriage research. Scoring guides are also provided. http://www.childtrends.org/_docdisp_page.cfm?LID=048E53A9-C1EB-4E66-8D598BFCA82D3B4B
  • Harvard Family Research Project’s evaluation periodical, The Evaluation Exchange, focuses on current issues facing program evaluators. Information is available for programs of all levels and articles are written by prominent evaluators in the field. www.gse.harvard.edu/hfrp/eval.html
  • National Healthy Marriage Resource Center has compiled a variety of resources related to marriage and relationship education programming. Resources include academic and government reports, fact sheets, and evaluation tools. www.healthymarriageinfo.org
  • University of Wisconsin Extension has a web site dedicated to program development and evaluation. Free resources on this site can guide you through logic models, program planning, and program evaluation. www.uwex.edu/ces/pdande/evaluation/

Visit the National Extension Relationship and Marriage Education Network (NERMEN) Web site at http://www.NERMEN.org for access to additional resources to support your program development and evaluation efforts.

Conclusion

This article has reviewed two important ways in which research can and should be used in relationship and marriage education programming. The first is to make sure that research supports the content and design of any program you may be using or developing; the use of extant literature to inform practice is a critical step in developing seamless connections between research and practice. The second is to evaluate the programs you offer. Research informed programming is a recursive process that is fueled by new literature and new evaluations. As we draw upon research to inform practice and concomitantly research our programs, we will enhance our efficacy and effectiveness in providing programs that truly strengthen healthy relationships.

References

Adler-Baeder, F., B. Higginbotham, and L. Lamke. 2004. Putting empirical knowledge to work: Linking research and programming focused on marital quality. Family Relations 53:537-546.

Babbie, E.R. 2001. The Practice of Social Research (9th ed.). Belmont: Wadsworth.

Carroll, J.S., and W.J. Doherty. 2003. Evaluating the effectiveness of premarital prevention programs: A meta-analytic review of outcome research. Family Relations 52:105-119.

Dumka, L.E., M.W. Roosa, M.L. Michaels, and K. Suh. 1995. Using research and theory to develop prevention programs for high risk families. Family Relations 44:78-86.

Halford, W.K. 2004. The future of couple relationship education: Suggestions on how it can make a difference. Family Relations 53:559-566.

Hennon, C.B., and M. Arcus. 1993. Life-span family life education. In T. H. Brubaker (ed.), Family Relations: Challenges for the Future (181-210). Newbury Park, Calif.: Sage.

Hite, S.J. 1985. Family Support and Education Programs: Analysis of a National Sample. Cambridge, Mass.: Unpublished doctoral dissertation, Harvard Graduate School of Education.

Hughes, R.J. 1994. A framework for developing family life education programs. Family Relations 43:74-80.

Jacobs, F.H. 1988. The five-tiered approach to evaluation: Context and implementation. In H.B. Weiss and F.H. Jacobs (eds.) Evaluating Family Programs. New York: Aldine De Gruyter.

Jakubowski, S.F., E.P. Milne, H. Brunner, and R.B. Miller. 2004. A review of empirically supported marital enrichment programs. Family Relations 53:528-536.

Karney, B.R., and T.N. Bradbury. 1995. The longitudinal course of marital quality and stability: A review of theory, method, and research. Psychological Bulletin 118:3-34.

Lengua, L., M. Roosa, E. Schupak-Neuberg, M. Michaels, C. Berg, and L. Weschler. 1992. Using focus groups to guide the development of a parenting program for difficult-to-reach, high-risk families. Family Relations 41:163-168.

Parke, M., and T. Ooms. 2002. More than a dating service? State activities designed to strengthen and promote marriage. CLASP Policy Brief, Couples and Marriage Series 2:1-7. http://www.clasp.org/publications/Marriage_Brief2.pdf

Cite this article

Higginbotham, Brian J., Katie Henderson, and Francesca Adler-Baeder.  2007. Using research in marriage and relationship education programming.  The Forum for Family and Consumer Issues, 12 (1).

Online: http://ncsu.edu/ffci/publications/2007/v12-n1-2007-spring/index-v12-n1-may-2007.php