by Eve Dicker, Marina McIntire, Jona Maiorano, Eve West, Anna Witter-Merithew, Phyllis Wilcox
CIT/RID Ad Hoc Committee on Educational Standards

Rationale and Goals for the Endorsement Process | Overview of the Guidelines and Standards | Site Selection | Rater Selection | How the Application Works | Why Program Assessment?
This report represents the “state of the art” of the CIT/RID Program Assessment Package at the time of the 1990 Convention. For a more current view, please refer to the RID Proceedings for the 1991 RID Convention. A historical perspective may seem a bit overblown for an organization as young as ours. But if we ignore this perspective now, then later, when we wish to have the knowledge, we will have forgotten it. This is the appropriate time to take the opportunity to know what happened, and to put at least a preliminary interpretation on it.
It would be both easy and tempting to view the program assessment package as something imposed externally. Some may believe that it was developed by an elite group within CIT or by a mere handful of interpreter educators, who might “have it in” for smaller programs. It may seem to others that this project just appeared “out of thin air.” Nothing could be farther from the truth.
Throughout the brief history of CIT, program assessment has been at the forefront of our agenda, and literally dozens of CIT members have had a direct hand in shaping the present package. In 1979, at the first meeting of CIT, our founding mothers were prompt in identifying program standards as an issue which they should appropriately address. RID had already made a first pass at some program standards in the late 1970’s, but these did not generate any means for implementation. Anna Witter-Merithew was involved in that first effort. In 1980, in response to a request from Jan Kanda, the first CIT President, Anna developed a preliminary application, drawing from the RID documents. In 1981, Mary Stotler took on the project. Her committee members were: Theresa Smith, Judie Husted, Sharon Neumann Solow, Mark Hoshi, Doug Baynton, and Maddy Hartwell. The following year, Mary requested responses to her draft from all CIT members. Many of us can remember receiving the request. Many of us were not terribly clear about what it was all about, but we dutifully filled out a comment sheet and returned it.
In 1983, Cathy Cogen was made Chair of a joint CIT/RID Committee. This was the first time the two organizations had officially collaborated on anything. The Committee at that time included Susan Arneson, Rick Hernandez, and Betty Colonomos. In the summer of 1986, while Marina McIntire was Vice-President of CIT, Jan, then President of CIT, asked her to become co-chair of the committee. At the same time, Dennis Cokely (then President of RID) appointed Gary Mowl as the second co-chair.
The Committee at that time included Susan, Rick, Betty, Gary, and Anna Witter-Merithew. McIntire took the response forms from Cathy and the committee made a preliminary revision of the package, incorporating some of our own ideas and comments. We then passed it around for critique to several people in the field, including Jenna Cassell, Bonnie Sherwood, Darlene Allen, Gary Sanderson, Caroline Preston, Bob Alcorn, Marty Barnum, Bonnie Dubienski, Don Renzulli, Eve West, Phyllis Wilcox, Marilyn Tousignant, and Theresa Smith. Some of these people were “free-lancers” and others were heads of well-established programs. Second and third revisions went back to those arbitrarily chosen folks and to the heads of each of the federally-funded programs: Lindsey Antle, Laurie Swabey, Carol Patrie, Bill Woodrick, Paula Sargent, Robert Baker, Bern Jones, Myra Taff-Watson, and Amos Sales. Comments and criticisms flowed back and Gary and Marina did their best to accommodate everyone.
At the 1986 Convention, the CIT Board haggled and hassled over the structure and nature of the package. Finally, we presented it to the CIT membership for approval, which was granted with some minor revisions. A few months later, the RID Board also approved the package. Lindsey Antle became Chair of the Committee, and it then included Anna, Jan, Charlotte Baker-Shenk, Linda Siple, Jona Maiorano, and Eve Dicker. In time, Marina again accepted the responsibility of being the chair of this committee. Betty, Jan, and Anna did a little shuffle and Charlotte started her family. Eve West stepped onto the Committee and here we are!
In the meantime, Jan Kanda and Charlotte Baker-Shenk had written a preliminary draft of a grant proposal to FIPSE, asking for funds to support a field-test of the package. They asked Marina to take a look at it, and the three of them completed a first-round application in October 1988. FIPSE approved it in January 1989 for the second round of applications. The three of us buckled down once again, with some help from our friends, and we were successful in getting the funding, which began in October 1989.
Next week [October 1990], we begin our second funding year. In the roughly nine years since CIT began this collaborative project with RID, it has been in the hands of at least one hundred and fifty CIT members, each of whom has had the opportunity to shape it. Now we face the acid test: the actual piloting of the package. Five programs (two-year, four-year, urban, and rural) will take our “baby” in hand, and we will find out whether this plethora of moms and dads has produced a viable package. The raters will meet in March of 1991, and the Joint Committee on Educational Standards will at the same time be collecting comments and criticisms from all participants in the process. That means a lot more people will critique and shape both the package and the process. In August 1991, in Washington, D.C., the CIT and RID Boards will deal with proposed revisions of the package and with the issue of funding. At that time, we may have a clearer answer as to whether and how the package will work for all of us. Our deep belief is that it will work and that it will work well.
Someone recently asked whether we could have done this more quickly if we had had funding at the beginning. After a moment, it became clear that it has been a real benefit to us to have struggled all these years. If funding had been available in 1981 or 1982, we would have pushed ourselves to meet deadlines.
We could not have allowed either the concept or the nature of the package to “seep” into the consciousness of the organization. As it stands now, each CIT member “owns” this package in a very real way. It is the result of a genuinely democratic process, in which we all participated in bringing this project to reality. We have something to be proud of, something that can now be used as a model for other organizations: both the package itself and the process of development.

Rationale and Goals for the Endorsement Process

At the present time there are approximately 63 programs that teach the process of interpretation between Sign Language and English in the United States. Generally they categorize themselves as “Sign Language Studies,” “Interpreter Training,” “Interpreter Preparation” or “Interpreter Education” programs. For the most part they have been established in post-secondary institutions throughout the country.
The number of programs has increased by leaps and bounds since the establishment of the first interpreting classes and programs some 25 years ago. With this increase came a tremendous diversity in the quality of instruction, program structure, overall goals and objectives as well as the resultant student experiences. For more than ten years, national leaders in the fields of interpretation and interpreter education have recognized a critical need to establish national standards and criteria in order to assess the quality of the interpreter preparation programs. The national endorsement system that is now addressing this need has been assembled by a joint committee of two national organizations representing interpreter educators and interpreters themselves, namely CIT and RID.
The proposal funded by FIPSE acknowledged the existence of such an assessment package containing the standards our field wanted to satisfy in the long run. The objectives of the proposal are: 1) to field test the assessment package; 2) to evaluate and modify the package; and 3) to activate the package and to make it financially self-sustaining. Further, it was proposed that a six-member committee be established and that these members, along with the Project Director, meet over a period of two years, train nine raters, supervise the field testing, evaluate the process and make revisions, and finally make recommendations to the CIT and RID Boards regarding the transition to a permanent endorsement process. This is no small task.
The funding of the proposal and the establishment of the process represent a major step for our profession. It will mean improved educational experiences for hundreds of students and improved interpretation services for Deaf Americans. The rationale of this package is sound: any profession should be willing to put its training to the test. The standards are lofty but not unachievable. The package will help us as a profession to assess where we are and what we need to do in order to move into the future. Therefore, the primary goal of the package is self-examination, to help us assess our strengths and weaknesses.
In addition, we will be encouraging high standards for interpretation and transliteration and fostering program development. Another goal is the hope that, by talking together, programs will develop a mutual respect for each other and will gain a wider respect for themselves within the greater academic community. This process is a healthy one. It will provide an indication as to how effective the self-examination was and will contribute to the data collection which must ultimately advance the field of interpreter education.

Overview of the Guidelines and Standards of the CIT/RID Endorsement Package

This overview of the standards and guidelines found within the CIT/RID endorsement process focuses on selected items described within the four components of the process: institution, program, faculty, and curriculum. Rater forms are used to assign scores for each section of the application. Scores are assigned by comparing the application to the established standards.

Institution

A four-year institution may award degrees on three levels: the bachelor’s, the master’s, and the doctorate. A two-year degree program will not be evaluated using the same criteria as a four-year degree program. There are undeniable benefits in offering an interpretation program through a baccalaureate-awarding institution. However, certain criteria, such as normal teaching load or admissions criteria, are applied differently to four-year and two-year schools. It is important for interpreting programs seeking endorsement to realize that many of the standards and criteria used to evaluate the two types of institution are separate and distinct.
There are many opportunities for either type of institution to “shine” during the endorsement process. For example, any institution can present a positive image through ancillary student services, including job placement, counseling, academic remediation, the availability of student health services, and the like. Evidence of policies that rectify past inequities toward minorities and women, or of efforts to make all facilities and services available to minority and disabled students, is another example of a positive evaluation criterion. An institution which can give evidence of recognizing the value of having both faculty and students professionally involved in local, state, national, and international affairs would be seen in a favorable light, regardless of whether it housed a four-year or two-year degree program.
In accordance with the standards put forth in the CIT/RID endorsement package, an institution giving evidence of an enlightened view of the linguistic status of American Sign Language, perhaps by offering credit toward graduation or by accepting it as a foreign or second language, would be recognized as being in the forefront where both interpreter education and Deaf Community policies are concerned.

Program

A program’s title is examined for evidence of preferred cultural identification with deafness, as opposed to a medical or pathological identification. The academic hierarchy should indicate that the program reports to an academic dean, instead of to handicapped services or support services. Secretarial help for the faculty should be in line with acceptable standards found in comparable non-interpreting programs. The name of the major, program degrees, and certificates of completion are evaluated under this program component to determine whether they reflect the nature and focus of the major.
Standard exit evaluations for competency in English, signed language, and interpretation or transliteration show up in the evaluation of the program component. The number of students who enter, graduate, gain related jobs, and become certified are noted. Programs in states without Quality Assurance evaluations are not penalized. Efforts to recruit and admit minority students, and a commitment to assisting them to graduate or successfully complete the program, are rewarded in this evaluative component.
A cohesive sequence of study and adequate practicum/internship experiences indicate a comprehensive and coherent set of learning experiences and provide the student with professional entry-level skills upon completion of the program. Sufficient contact with members of the Deaf Community gives evidence of the program acting in tandem with the Community where mutual issues are concerned. Programs can offer proof of interaction through the involvement of students at political, social, and educational gatherings, both on and off campus.
Program support facilities in the form of secretarial and office support, photocopying, computer time, video and audio tape usage and dubbing, and library and laboratory facilities are considered essential to instruction enhancement. Because students’ views are valued, opinions of both matriculating students and graduates of the program are requested in the form of confidential questionnaires. Letters of support from the academic community and the Deaf Community are read and evaluated also. In addition, the extent to which a program depends on “hard” or “soft” money for program funding sources has an impact on the evaluation outcome.

Faculty

Educational experience should have been garnered in relevant academic majors, with faculty backgrounds varying so as to represent a balanced set of views and academic perspectives. Faculty who attend professional conferences, conventions, and symposia are in a better position to exhibit their commitment to the profession. Publishing and presenting papers, leading and offering workshops, and sharing knowledge and skills related to the profession all help to earn evaluation points, in addition to demonstrating to the national community that the faculty is knowledgeable and up-to-date. Points are accrued for faculty members who hold certificates from relevant organizations, such as RID and SIGN. Membership in CIT and other related professional organizations bears witness to a maintained interest in current issues in the field.

Curriculum

For each core course, media assignments, testing procedures, and instructional strategies are assessed. A unit-by-unit description and schedule of the class, with the number of days spent on each unit and a content summary, will also be solicited and evaluated. The coherence of the material and whether the courses follow appropriate prerequisites will be examined. Evaluations should be able to determine the students’ progress through the program, and the outlines should reflect an organized approach to the students’ overall experience and education.


Site Selection

How did the Committee go about selecting field-test sites? In November of 1989, the CIT/RID Educational Standards Committee met and established the following requirements for field-test sites:

  1. Sites selected should represent a sampling of programs offered by institutions which offer Associate, Baccalaureate or Master’s degrees. Guideline: should be proportionate to the total number of degree-granting programs in the same category.
  2. Program must have been established for a minimum of five academic years.
  3. Program should have a minimum of thirty graduates to date.
  4. Selected sites should represent both urban and rural locations. Guideline: target programs that will graduate individuals serving a predominantly rural or urban consumer base.
  5. Selected sites should represent a wide geographic distribution. Guideline: attempt to target at least one program from the East Coast, West Coast, Southeast and Midwest.
  6. Selected sites should serve deaf students on campus.
  7. Selected sites must have at least one staff member with full-time faculty status.
  8. Selected sites must have the ability to create a videotape of instructional activities which occur in the language lab and which address specific criteria.
  9. Selected sites should include one program with an interpretation focus and one program with a transliteration focus.
  10. Selected sites must submit five (5) letters of support from local organizations and agencies, college deans and administrators, faculty and students. Guideline: letters should reflect the cooperation between the program and the recommending entity, and the commitment to support the endorsement process.
  11. Selected sites must have a representative at the CIT Convention in California during October, 1990, to participate in training related to the application process.
  12. Selected sites must agree that the data collected from the application process can be used to advance the field of education of interpretation.

The Committee followed the schedule below:
January 1990: Invitations and applications for field-test sites were sent to all programs listed in the CIT Directory, along with the criteria for selection.
March 15, 1990: Deadline for field-test site participation applications. Fourteen programs responded.
June 4-6, 1990: The CIT/RID Educational Standards Committee selected the following five programs as the ones which best satisfied the criteria:
National Technical Institute for the Deaf (Rochester, NY)
University of Wisconsin (Milwaukee, WI)
Los Angeles Pierce College (CA)
Northcentral Technical College (Wausau, WI)
Tulsa Junior College (OK)

Rater Selection

In November 1989, the Committee met in Charlotte, NC, and established the qualifications required of review panel members, as well as the application and screening procedures and timelines.

In January, we sent letters of invitation to all CIT members and all other known interpreter educators and institutions with programs. We also placed a rater selection announcement in RID Views. March 15 was the deadline for receiving rater applications. We received 35 applications, and copies of all of them were distributed to a sub-committee to determine which ones satisfied the criteria. In April, we received from the sub-committee a list of applicants who were to receive a case study (the second part of the application process). In May, we sent case studies to the twenty-four eligible applicants, to be returned by May 25. Eleven applicants finished the case study. In June, we selected the nine candidates who would receive training and become raters: Laurie Swabey (Minnesota), Cathy Cogen (Massachusetts), Pat Stawasz (Connecticut), Elizabeth Winston (District of Columbia), Dr. Nancy Frishberg (Connecticut), Chris Monikowski (New Mexico), Sally Koziar (Illinois), Dr. Sherman Wilcox (New Mexico), and Mary Mooney (Texas).
Additionally, we sent letters of invitation to 13 candidates who originally applied to be raters, asking them to complete the case study and submit it for review. These persons, if selected, would be extended an invitation to participate in the training at their own expense. There were no responses.

How the Application Works

The case studies represented three simulated interpreter education programs which have applied for endorsement. The three programs are Longview State University (a four-year program), Horizon Community College (a two-year program), and the Valleyview Junior College program (a two-year, private college).
[In the presentation, we focused on the philosophy statements, the sequence and types of coursework, faculty organization and qualifications, and discussed the ratings assigned by the rater candidates during training.]
Raters must address a number of considerations when rating individual applications.


Why Program Assessment? A Perspective on Benefits

It is common practice for educational programs to be expected, even required, to measure up to specific standards, especially programs of a specialized nature, e.g., nursing, early childhood education, or human services. Following a similar practice will help interpreter preparation gain well-deserved recognition for its complexity and academic rigor.
Embarking on the self-analysis involved in completing the application offers an excellent barometer of what we are doing in our programs. It forces each program to take time to look at itself and introspect, which can only be positive. Recommendations from raters and the guidelines in the package could be used to lobby within your institution for more resources, e.g., additional faculty, equipment, materials, space, smaller classes, or recognition. Such efforts will appear less self-serving with documentation and feedback in hand.
The process of self-assessment needs the cooperation of all staff, faculty and administration. It creates an opportunity for the exchange of ideas, opinions and feelings about your program amongst those directly involved. Since all programs must plan for the future, the process of assessment forces us to look at where we have come from and to think about the future. With input from the rater responses, future planning takes shape more easily, in compliance with a national standard.
Programs completing the application will receive feedback from knowledgeable, trained professionals in our field. The feedback will indicate strengths as well as areas for improvement, provide suggestions for modification, and lend guidance for future directions. Most academic institutions periodically require their programs and curricula to go through an audit process. During such an audit, a program must perform many of the same kinds of introspective tasks required by this endorsement package. For example, an academic audit of one IPP three years ago began with the development of critical programmatic questions which needed to be addressed. How much easier the writing of that document would have been had we completed an endorsement application first. Information gathered through the endorsement process will prove valuable when going through an audit.
An end-product to be accomplished at some point will be the publication of a directory of IPPs, including information based on the endorsement process. Potential students could use such a directory to locate programs which meet their needs. Programs could likewise use the directory in their own recruitment efforts. Probably the most obvious and the least concrete advantage is alignment with standard practices in our field today. Each of us feels ethically responsible to do all that we can to provide our students the best education possible. Complying with the guidelines and standards of the endorsement package assures us of meeting that commitment.