Volume 2 ~ November 2010

ISSN # 2150-5772 – This article is the intellectual property of the authors and CIT. If you wish to use this article in your teaching or in another format, please credit the authors and the CIT International Journal of Interpreter Education.

Daniel Roush,1
Eastern Kentucky University


Introduction

In considering how educational technology may expedite intended educational outcomes, interpreter educators need to consider whether the tools chosen are maximally accessible and usable. According to the Center for Universal Design (CUD), universal design is “the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design” (CUD, 1997). At first blush, universal design may seem pertinent not to interpreter education but rather to engineers and architects. I will discuss the application of universal design (UD) principles to educational technologies that have been adopted for use in interpreter education. In particular, I will focus on the design of video annotation software features used in the assessment of interpretations. Based on data collected as part of a three-day online seminar held in May 2009 on technology use in assessing interpretations, I will argue that some technology features currently in use appear to meet minimal standards of accessibility (i.e., the “adaptation or specialized design” mentioned above) but do not fully comply with principles of UD. I will also report a pilot study of the development of prototype annotation features that would not only accommodate the specialized needs of users who are deaf, but would actually be more usable by all levels of users. As part of this study, I report preliminary survey and discussion forum data. I will focus on technologies used for American Sign Language (ASL) and English interpreter education in the United States. In keeping with the concept of UD, I will attempt to consider how these technology features may be usable in interpreter education in other language pairs, whether spoken or signed. The framework of UD and standards of accessibility will help interpreter educators answer the question of whether a given tool is maximally accessible and usable for its intended purpose.

Accessibility as a relative term

Since my focus is on the design of software technology used in interpreter education, I will center my discussion of accessibility within this area. In the United States, Section 508 of the Rehabilitation Act2 provides standards for technology used by employees of the federal government and/or members of the public accessing federal services. The Web Accessibility Initiative (WAI), part of the international World Wide Web Consortium (W3C), has developed the Web Content Accessibility Guidelines (W3C, 2008) and the Accessible Rich Internet Applications (W3C, 2009) that are used by Web designers and developers who wish to voluntarily create accessible Web sites and Web applications. Some organizations and educational institutions have internal Web accessibility policies that incorporate the WAI standards by reference. Both Section 508 and the WAI standards have considerable overlap in their criteria for developing accessible Web content.
The overall goal of these standards is to make technology features and content more accessible for people with disabilities. These standards have been established in an attempt to strike a balance between making features and content accessible for the maximum number of users and, at the same time, not placing an undue hardship on designers and developers. The risk inherent in establishing criteria is that people may make absolute statements based on minimal compliance. In other words, designers may claim that their software is accessible, not recognizing that this is a relative term. Compliance with these standards does not guarantee that the features and content are absolutely accessible for every user and for every application of the features.3

Alternative formats vs. language translation

One of the primary criteria within the accessibility standards is that all non-text content should have a text equivalent. For example, if an image is used, it should be tagged with descriptive text that can be processed by text-to-speech screen readers for users who are blind or visually impaired. For audio content, a text transcript or synchronized captions should be provided for users who are deaf or hard of hearing. By and large, the standards are concerned with providing accessibility from a modality perspective: since sound cannot be perceived by deaf individuals, the content should be provided in an alternative visual modality (i.e., text). Although visual text can be perceived by a person with vision, that does not necessarily mean it can be linguistically comprehended. Many deaf people in the United States read English text quite well. However, for some deaf people, American Sign Language is their primary language and their preferred means of comprehending language-based content (e.g., consider the host of Internet video logs in ASL). Technology accessibility standards do not require translation of content into a signed language; to do so may be considered an undue burden. However, if software or Web content is designed and/or promoted for use by signing users, consideration should be given to providing more than text for language-based content. Principles of universal design, rather than accessibility standards, may be a better guide to ensuring greater usability of software features and content designed for interpreter education.
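To make the text-equivalent criterion concrete, the following sketch shows how an image and a video might be given the alternatives the standards call for, using standard browser scripting (TypeScript against the DOM API). The file names are hypothetical illustrations, not references to any tool discussed in this article.

```typescript
// A minimal sketch of modality-based text equivalents (hypothetical files).

// Non-text content: tag an image with descriptive alternative text that
// text-to-speech screen readers can process.
const image = document.createElement("img");
image.src = "classifier-illustration.png"; // hypothetical image file
image.alt = "Illustration of an ASL classifier phrase: one car smashing into another";

// Audio/video content: attach a synchronized caption track for users who
// are deaf or hard of hearing.
const video = document.createElement("video");
video.src = "source-lecture.mp4"; // hypothetical video file
video.controls = true;

const captions = document.createElement("track");
captions.kind = "captions";
captions.src = "source-lecture.en.vtt"; // WebVTT caption file
captions.srclang = "en";
captions.label = "English captions";
captions.default = true;
video.appendChild(captions);

document.body.append(image, video);
```

Note that both equivalents are text: they satisfy the modality criterion, but, as argued above, they do not by themselves make the content linguistically accessible to a user whose primary language is ASL.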

Universal design

When accessibility requirements, such as sidewalk curb cuts for people who use wheelchairs, are implemented, there are often unintended positive consequences. Curb cuts not only make sidewalks accessible to people in wheelchairs; they also make sidewalks much more usable for the person pulling luggage on wheels, the parent pushing an infant in a stroller, or the elderly person who does not have to lift his or her foot as high to step onto the sidewalk when crossing the street. Because certain designs benefit more than just people who are disabled, and because the population in the United States is aging, design professionals began considering maximum usability at the beginning of the design process, not as an afterthought or as minimal compliance with accessibility standards.
In 1997, a group of design professionals developed the Principles of Universal Design, version 2.0,4 a set of principles to guide various disciplines in the design of environments, products, and communications. I present the seven principles here, along with a definition of each (Center for Universal Design, 1997).
     
PRINCIPLE ONE: Equitable Use
The design is useful and marketable to people with diverse abilities.
PRINCIPLE TWO: Flexibility in Use
The design accommodates a wide range of individual preferences and abilities.
PRINCIPLE THREE: Simple and Intuitive Use
Use of the design is easy to understand, regardless of the user’s experience, knowledge, language skills, or current concentration level.
PRINCIPLE FOUR: Perceptible Information
The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities.
PRINCIPLE FIVE: Tolerance for Error
The design minimizes hazards and the adverse consequences of accidental or unintended actions.
PRINCIPLE SIX: Low Physical Effort
The design can be used efficiently and comfortably and with a minimum of fatigue.
PRINCIPLE SEVEN: Size and Space for Approach and Use
Appropriate size and space is provided for approach, reach, manipulation, and use—regardless of user’s body size, posture, or mobility.
In the full document, each principle has a set of guidelines that may or may not apply to all designs. Also, since the focus of this paper is on software technology, some principles do not apply, such as Principle Six, Low Physical Effort, which relates more to computer hardware design. I list these principles here as a brief introduction. In the following sections, I will attempt to apply various UD principles to certain current and emerging features of software programs used in interpreter education.

Universal design in software annotation features

English, like other spoken languages, can be recorded using a standardized writing system; ASL has no such system. A writing system allows a spoken language to proliferate its own literature; it also serves as a tool for annotating and critiquing the language itself, which is useful for educating interpreters who work between two spoken languages. Interpreter educators and students of spoken languages can conveniently use the writing system of the target and/or source language to capture an interpreting performance and to provide specific and permanent teacher-, peer-, and self-assessment in either the target or source language. The collection of a student’s transcribed interpretations and written feedback in a portfolio forms the basis of assessment in spoken-language interpreter education (Sawyer, 2004).
With the advent of computer-based word processors and electronic texts, annotating and evaluating text with text is essentially as easy as marking up a paper with a red pen. Word processing software, such as Microsoft Word, contains features that allow comments to be shared and tracked between authors and reviewers; these could be used in interpreter education as well. This commenting feature can be seen in Figure 1.


Figure 1: Comment feature in Microsoft Word.

Electronic text and related callout graphics exploit the non-linear/non-instantaneous nature of the English writing system5 by allowing for annotations to precisely reference the text being critiqued. On the other hand, those rendering interpretations from English into ASL cannot enjoy the same benefits of having a writing system and the advances that electronic text offers (i.e., there is an inequality here, cf. UD Principle One). Additionally, although English text is visually accessible from a modality perspective, it is not linguistically accessible for deaf people whose native language is American Sign Language (i.e., there is inflexibility here, cf. UD Principle Two).
Contrary to popular belief, American Sign Language is not a code or “linguistic prosthesis”6 based on English. It cannot be adequately recorded by the English writing system (English glossing of ASL is discussed below). Although several writing/notation systems7 have been developed, the de facto method of “writing” ASL is to capture it through video recordings.8 ASL-English interpreter education programs use video recording equipment to capture the English-to-ASL interpretations of students/mentees and attempt to use those recordings to provide feedback. The difficulty in using video recordings as an evaluation tool is that there is essentially no convenient and practical means to provide feedback to students using ASL itself.
In particular, students or mentees often do not have the benefit of comments provided in ASL that can be referenced to the precise moment in their language performance being critiqued. This makes it difficult to capitalize on teachable moments and to connect specific exemplars or errors with specific feedback. The reason is that video, as opposed to written text, is in a linear and instantaneous format (typically displayed at 15 to 30 frames per second). By its nature, video format is ideal for displaying a representation of the movement of ASL signing performance, but it does not have the same overall gestalt as text on a page or screen—where all the words on a page can be seen at once and any graphic hierarchy or annotations to the text can easily be found.
Data from an online survey of 150 interpreter teachers and mentors who participated in a three-day online seminar in May 2009, hosted by the National Interpreter Education Center and entitled “Technology Tools for Assessing ASL-English Interpretations,” suggest that technology used in interpreter education often lacks needed features, which limits its utility in interpreter training and mentoring (more about this seminar and the survey is described below). One of the questions on the survey was: “What technology do you primarily use to provide feedback to students’ video-recorded English to ASL interpretations or signing skills?” Of the 150 interpreter teachers and mentors who responded, 33% selected “VHS/VCR technology.” Forty-two percent selected “other” and specified the technology they used with a comment. Nearly all of these comments specified the use of digital video recorded on DVD, a local computer, or online (see Table 1 in the Appendix for a summary of responses and Table 2 for a list of comments).
The use of these technologies can be problematic. Comments cannot be directly tied to the precise moment in the student’s performance on video. It is possible to use the time code on the VCR/DVD player or online video player to make a reference, but this may be cumbersome for both the teacher and the student (i.e., it is not simple and intuitive, cf. UD Principle Three). Although some interpreter education programs (IEPs) have adopted software that was originally designed for giving feedback on athletic and artistic performance and that allows for time-based text annotations of video, these programs can cost between $4,500 and $7,400 (US dollars) for a single license; less than 4% of the survey participants reported using these programs. Some IEPs utilize a free software program called ELAN (see http://www.lat-mpi.eu/tools/elan), which was designed for synchronized linguistic annotation of video data; less than 3% of the survey participants reported using this program. These software programs do offer the ability to annotate video with text, but this does not address the issue of linguistic accessibility for the one giving feedback, and text does not provide feedback in a form that students could then easily model (again, cf. UD Principle Three).
For example, if an interpreter recorded an interpretation of a lengthy English discourse into ASL and incorrectly interpreted the sentence “This morning an orange car smashed into my red car,” the mentor would first need to reference this part of the interpretation and attempt to provide an English text gloss of the ASL. Because ASL uses a grammatical classifier9 system to represent objects and spatial relationships, it is difficult to use English text characters to fully represent ASL classifiers. An English gloss of an equivalent10 way to interpret the sentence in ASL is provided in Figure 2.


Figure 2: Text-based glossed transcription of an ASL sentence (Baker-Shenk & Cokely, 1991:288).

This gloss attempts to capture not only the manual classifier on the right hand (i.e., representing the red car) but also a separate, simultaneous classifier signed with the left hand (i.e., representing the orange car). The gloss also attempts to record the important simultaneous non-manual grammatical features of eye gaze and modifying facial expressions. As can be seen from this example, attempting to represent ASL with text is unduly complicated for both the transcriber and the reader of the transcription. It is more linguistically accessible for both the mentor/teacher and the mentee/student to have ASL annotations represented in a graphical, analog way, such as in the static illustration in Figure 3, or better still, in dynamic video format.


Figure 3: An illustration of the ASL classifier phrase “car smash into other car” (from Baker-Shenk & Cokely, 1991:288).

Methods and results of the pilot study: The development and evaluation of prototype annotation features

As mentioned above, in May 2009, the National Interpreter Education Center (NIEC) at Northeastern University, in Boston, Massachusetts (US), hosted a three-day online seminar entitled “Technology Tools for Assessing ASL-English Interpretations.” An announcement for this seminar was sent to hundreds of e-mail addresses from the NIEC contact database. The announcement targeted interpreter mentors and teachers who provide feedback to interpreting or ASL students. The purpose of the seminar was to present and discuss current and potential uses of technology to assess video-recorded interpretations. Online registration for participation in the seminar was required. As part of the registration process, participants were required to complete an online survey that included 39 questions/items. The first question in the survey was, “Do you give feedback on signing and/or interpreting skills in your work?” Of the 150 responses, 100% selected “Yes.” Participants were also asked to select their role/employment title from a list. Because participants were allowed to select any number of roles that applied, there is no straightforward breakdown of roles (see Table 3 in the Appendix for a summary of responses).
The online “Technology Tools” seminar used discussion forum software organized around several discussion topics. Some topics posed open questions, such as what technology participants were currently using; other topic areas provided materials for participants to review and comment on. Participants could navigate to any discussion topic at any time during the three-day seminar. One of the discussion topics included a prototype mentoring environment for participants to test and discuss (see Figure 4 and www.interpreting.eku.edu/bigmac/demo/demo_fs.html). The purpose of developing this prototype and discussion topic was twofold. The first purpose was to demonstrate, from a technology perspective, that it was possible to develop online annotation features for video-recorded ASL that were more congruent with the principles of universal design than text annotation features. The second purpose was to allow the participants to test the prototype software and provide evaluative comments.

Figure 4: A prototype mentoring environment that uses signlinking to add video annotations to an interpretation (see www.interpreting.eku.edu/bigmac/demo/demo_fs.html).

The core technology for the prototype was based on exported signlinked Web pages created by a Web editing tool called SignLink Studio (SLS). Developed by the Centre for Learning Technologies at Ryerson University, The Canadian Hearing Society, and the University of Toronto, SLS is available to Web authors to create and implement accessible sign language-based Web pages (see www.signlinkstudio.com; Richards, Hibbard, Hardman, Woodcock, & Fels, 2008). It is a stand-alone program that must be downloaded and run on personal computers. The fundamental concept of SLS is the creation of hyperlinks within a video so that there is no need to use text-based linking for navigation.
Signlinking is conceptually equivalent to text hyperlinking on a Web page. However, whereas text hyperlinking identifies the space occupied by a string of text on a page that links to some other resource on the Web, signlinking identifies a time interval of video during which the signer refers to the resource (Richards et al., 2008). The prototype mentoring environment harnessed the basic concept of signlinking as a method to add video-based annotations to ASL interpretations, thus demonstrating that a more universally designed approach to annotation features is technologically possible.
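Although the internal workings of SLS are not detailed here, the core idea of signlinking can be sketched in a few lines of browser scripting. The data structure and names below are my own hypothetical illustration, not SLS’s actual format: a signlink is simply a time interval paired with a linked resource, and the player tests the current playback time against those intervals.

```typescript
// Hypothetical sketch of the signlinking concept: a hyperlink that occupies
// an interval of *time* in a video rather than a region of *space* on a page.
interface SignLink {
  startSec: number; // interval during which the signer refers to the resource
  endSec: number;
  href: string;     // the linked resource (in the prototype, a video comment)
  label?: string;   // optional one-word text label, sometimes a gloss
}

const signlinks: SignLink[] = [
  { startSec: 12.0, endSec: 16.5, href: "comment-01.html", label: "CLASSIFIER" },
  { startSec: 41.2, endSec: 45.0, href: "comment-02.html", label: "EYE-GAZE" },
];

const video = document.querySelector("video")!;

// As playback proceeds, determine which signlink (if any) is current,
// much as a browser knows which string of text on a page is a live link.
video.addEventListener("timeupdate", () => {
  const t = video.currentTime;
  const current = signlinks.find((l) => t >= l.startSec && t <= l.endSec);
  if (current) {
    console.log(`Active signlink: ${current.label} -> ${current.href}`);
  }
});
```

In the prototype mentoring environment, following the active link plays the video-recorded ASL comment associated with that moment of the interpretation.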
Two navigation aids in signlinked videos also conform to UD principles. When multiple hyperlinks appear in a conventional text-based Web page, users can simply scan over them to gain an overall view, or gestalt, of the distribution of links with respect to each other and to the rest of the page, enabling them to form a quick, intuitive understanding of the Web page’s role (e.g., whether the page is a content page or an index to other pages). In signlinking, this top-level view is achieved via the interaction of two navigation aids. The first is a signlink density display (Figure 4) that shows the location and relative length of all of the signlinks in the video, with the current link displayed in red (in the prototype mentoring environment, these links reference video comments about the interpretation). Clicking on a link lets the user discover the content of the link as it is played in the video area (in the prototype, this is the content in the interpretation that warranted a comment).
The second navigation aid is the signlink thumbnail images (Figure 4). These are arranged, three at a time, in a row below the video. The thumbnail images, one for each signlinked time interval, represent a frame captured from the respective intervals in the video. Each thumbnail image is given focus with red highlighting when the corresponding signlink occurs in the video. The static thumbnail images are not necessarily sufficient to unambiguously label what the signer is saying since movement is critical to sign language, but they are often enough to provide a hint or trigger recall for a returning user (Richards et al., 2008).
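As a rough illustration of how the density display might work (again, a hypothetical sketch that assumes the SignLink structure introduced above, not SLS’s actual rendering code), each signlink can be painted as a bar whose position and width are proportional to its time interval, with the current link highlighted in red:

```typescript
// Hypothetical sketch: draw a signlink density display on a canvas, with the
// currently playing link highlighted in red (assumes the SignLink interface
// from the previous sketch).
function drawDensityDisplay(
  canvas: HTMLCanvasElement,
  links: SignLink[],
  videoDuration: number,
  currentTime: number,
): void {
  const ctx = canvas.getContext("2d")!;
  ctx.fillStyle = "#dddddd"; // the track representing the whole video
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  for (const link of links) {
    const x = (link.startSec / videoDuration) * canvas.width;
    const w = ((link.endSec - link.startSec) / videoDuration) * canvas.width;
    const isCurrent = currentTime >= link.startSec && currentTime <= link.endSec;
    ctx.fillStyle = isCurrent ? "red" : "#555555"; // current link in red
    ctx.fillRect(x, 0, w, canvas.height);
  }
}
```

A click handler on the canvas could map the horizontal click position back to a time and seek the video to the corresponding signlink, and the same interval test could drive the red highlighting of the thumbnail images described above.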
Optional text features in the signlinked prototype also support UD principles. The first is an optional text label added below each thumbnail; this label attempts to summarize, in one word, the topic of the linked video comment (sometimes using a gloss). The text label is also a hyperlink with the same URL as the signlink it is associated with. The second text feature is an optional text content area displayed to the right of the video. Within the prototype, this area provides an English transcription of the original source audio on which the interpretation is based, as well as instructions for the mentee. SLS also includes a captioning feature that was used in the prototype to provide a synchronized transcription of the source audio (Richards et al., 2008).
The prototype mentoring environment demonstrated that it is technologically possible to apply signlinking as a method of adding video-based ASL annotations to interpretations. Since UD is ultimately about human usability of designs, I was also interested in testing the prototype with the interpreter educators participating in the seminar. A link to the prototype was provided within a separate discussion topic area entitled “Signlink Demo: Prototype Interpreter Mentoring Environment.” Participants were simply asked to explore the demo and discuss their thoughts and opinions within the respective discussion forum area. Over the three-day period, 16 separate comments were posted by 11 different participants (not counting my own three posts). A thematic analysis of these 16 comments reveals four basic themes: favourable comments, questions regarding future development, requests for technical support in viewing the prototype, and concerns with training and support (see all comments in Table 5 of the Appendix).
The qualitative data from the discussion comments within the favourable theme corroborate related quantitative data from the survey. Participants were asked to rate on a five-point scale the importance of the ability to provide feedback in ASL using video comments rather than text only. The largest proportion (42%) selected the highest rating, “5, very important,” followed by 28% selecting “4,” 17% selecting “3,” 7% selecting “2,” and 7% selecting “1, not important.” It appears that this feature, which was included in the prototype, is an important one for the participants. Another feature that participants were asked to rate, and that was also included in the prototype, was the ability to provide feedback at the moment in the student’s video to which the feedback refers. Participants rated the importance of this feature on the same five-point scale. The majority (62%) selected “5, very important,” followed by 27% selecting “4,” 6% selecting “3,” 5% selecting “1, not important,” and 1% selecting “2” (see the Appendix for a table that summarizes participants’ ratings of six software features). Based on both the qualitative and quantitative data, it appears that the participants favoured the prototype software and highly valued its key features.
 

Limitations of the pilot and future directions

The results of this pilot study, completed with interpreter educators, are preliminary and warrant further usability testing with the mentee/student population. Another limitation of the pilot study is that the prototype mentoring environment represents an end product that provides an interface for reviewing video annotations. It does not, however, represent the software tools required for adding annotations to video, which was accomplished with SignLink Studio. The current SLS offers a rich authoring environment for the user who has the desire and skill to create Web pages. Admittedly, users may view the use of SLS for assessing interpretations as time consuming and overly complicated. In other words, while SLS may be well designed for its intended user (i.e., the Web page author), the innovation of using the current SLS design in interpreter education may not be universal enough, particularly when considering UD Principle Three: Simple and Intuitive Use. More work is required to develop tools that are universally designed with this new application of signlink technology and the end user in mind.
The prototype mentoring environment demonstrated that it is possible to use signlinking to add comments to ASL recorded on video. Its design is universal enough to be used with any signed language (UD Principle Two: Flexibility in Use). It is also flexible enough to use the audio capabilities of the video format to add spoken-language comments (with or without captions) to either spoken or signed interpretations. Future designs could also allow mentees/students to assess and annotate their own interpretations and submit these to the mentor/teacher, who could then assess their ability to do self- or peer-assessment. This technology could also be used to provide feedback on language performance (as opposed to interpreting performance) in ASL language classes or any other language class.

Summary and conclusions

Although technology used in interpreter education may comply with modality-based accessibility standards, such as those found in the WAI guidelines or Section 508, this does not mean that features or content are accessible from a linguistic perspective. Universal design principles may be a better guide for ensuring that technology used in interpreter education is maximally accessible and usable. An example of this can be seen in the prototype mentoring environment, which attempted to achieve greater equity, flexibility, simplicity, and perceptibility by allowing users to review synchronized video annotations as an interpretation assessment tool. UD principles were used to assess the usability of both the prototype as an end product and the SignLink Studio software that was used to create the prototype.
As technology continues to develop, interpreter educators should continually survey what tools are available and critically consider how these tools may expedite intended educational outcomes. Both accessibility guidelines and the principles of universal design can be used as frameworks for critical thinking about the design and application of technology in interpreter education. These criteria might be used to think and dialogue critically about the use of general technology resources (e.g., online course platforms) provided by the educational institutions in which interpreter education programs are situated; many technology support staff within institutions are familiar with these guidelines and principles. These frameworks could also be used to evaluate software programs that are specifically marketed to interpreter educators, and interpreter educators can either work with the designers of these programs to make improvements or independently design new alternatives (as has been done in the online prototype discussed above and with VideoLinkwell™ software, available at www.videolinkwell.com, which was programmed by an interpreter educator). Interpreter educators can also use accessibility and usability criteria as they consider online tools designed for anyone to publish Web content, such as Blogger™ (www.blogger.com), which was used to create co-authored video logs shared between teachers and students (Roush & Coyer, 2007), or the annotation tools available on video-sharing Web sites such as YouTube and Viddler. A demonstration of these tools was made available during the “Technology Tools” online seminar and can be seen at http://www.interpreting.eku.edu/bigmac/demo/vid_annote_demo.html.
More work needs to be done in our field to specify our technology needs, collaborate with designers and developers, and agree on best practices for expediting intended student learning outcomes using technology. This work will no doubt require more technology training for interpreter educators and, ultimately, more money for the development, purchase, and administration of technology. Learning more about usability design will make us more savvy as we pursue these goals; it will help professionals in the field advance in their use of technology and, ultimately, improve our ability to educate interpreters.

Acknowledgements

I would like to thank Betsy Winston and Sarah Snow for their support in arranging the “Technology Tools” seminar and in developing the survey, and everyone who participated. Thanks also go to Karen Petronio, Ward Henline, and Gay Woloschek for assisting me in the development of the prototype mentoring environment. I would also like to thank Lisa Bordone Roush, Karen Petronio, and the anonymous reviewers for their helpful comments on drafts of this article. I accept full responsibility for any errors.

References

Baker-Shenk, C., & Cokely, D. (1991). American Sign Language: A teacher’s resource text on grammar and culture. Washington, DC: Gallaudet University Press.
Center for Universal Design. (1997). The principles of universal design, version 2.0. Raleigh, NC: North Carolina State University.
National Association of the Deaf [U.S.]. (1913/2003). Preservation of American Sign Language [Motion picture]. United States: Sign Media, Inc.
Richards, J., Hibbard, E., Hardman, J., Woodcock, K., & Fels, D. I. (2008, June 24). Signlinking 2.0. Available from the Technology and Deaf Education Symposium at http://www.rit.edu/ntid/vp/techsym/papers/2008/T11C.pdf. (Accessed 28 March 2010).
Roush, D., & Coyer, N. (2007). Co-authored ASL vlogs as a tool for student reflection and teacher assessment. DVD proceedings of the 2007 American Sign Language Teachers Association (ASLTA) Conference, Tampa, FL.
Sawyer, D. (2004). Fundamental aspects of interpreter education: Curriculum and assessment. Amsterdam/Philadelphia: John Benjamins.
W3C. (2008). Web content accessibility guidelines (WCAG) 2.0. Available from the World Wide Web Consortium (W3C) at http://www.w3.org/TR/WCAG20. (Accessed 28 March 2010).
W3C. (2009). Accessible Rich Internet Applications (WAI-ARIA) 1.0. Available from the World Wide Web Consortium (W3C) at http://www.w3.org/TR/wai-aria. (Accessed 28 March 2010).

1 Correspondence to: daniel.roush@eku.edu
2 The Rehabilitation Act of 1973 in the United States is part of civil rights legislation intended to prevent discrimination on the basis of disability.

3 Admittedly, the same could be true of something designed in compliance with the Principles of Universal Design—the phrase “universally designed” is also relative.

4 Copyright © 1997 North Carolina State University, the Center for Universal Design, compiled by advocates of universal design, listed in alphabetical order: Bettye Rose Connell, Mike Jones, Ron Mace, Jim Mueller, Abir Mullick, Elaine Ostroff, Jon Sanford, Ed Steinfeld, Molly Story, & Gregg Vanderheiden.

5 I use the term non-linear here in the computer science sense of random access contrasted with sequential access. In other words, a reader can access English script at any point on a page/screen without having to sequentially move through all the words from the beginning. Because of this, a reader has an overall sense of the graphical layout of the page/screen and can immediately skip to areas of the page/screen where annotations have been made. On the other hand, because video format is time-based, it is linear in nature and requires sequential access to locate specific parts.

6 I attribute the coining of this phrase to Harlan Lane (personal communication).

7 Among these systems are SignWriting, Hamburg Notation System, and Stokoe Notation.

8 In the early 1900s, the National Association of the Deaf in the United States recorded ASL on film as a way to preserve the language (NAD, 1913). Since the 1980s, hundreds of ASL titles have been produced in video format. More recently, we have seen the advent of online scholarly journals published in ASL (see http://dsdj.gallaudet.edu).

9 I use the term “classifier” here; elsewhere in the linguistics literature, these signs are also referred to as polycomponential signs.

10 Equivalency in interpreting largely focuses on producing the equivalent intent and meaning of the source language in the target language. Therefore, since the focus of interpreting is on retaining meaning and intent, an equivalent interpretation is often stated idiomatically and is not a word-for-word literal interpretation that attempts to retain the form of the source language. New interpreters often make errors in meaning equivalency when interpreting into their second language (e.g., ASL) because they may not know how to produce an utterance in an idiomatic way. They tend to fall back on a literal word-for-word interpretation, which often has no meaning, or a completely different meaning, for speakers of the target language. Although students should be encouraged to develop self-assessment skills, this may not be realistic at an early stage in their education if they have not sufficiently developed native-like intuitions about their second language.