Friday, June 14, 2013

W6: Nine Ways to Reduce Cognitive Load in Multimedia Learning

Summary

In their article Nine Ways to Reduce Cognitive Load in Multimedia Learning, Richard Mayer and Roxana Moreno explore how the way we present information, and how much we present at a time, can affect learners' ability to absorb and process new information.

Mayer and Moreno begin by defining multimedia learning ("learning from words and pictures") and multimedia instruction ("presenting words and pictures that are intended to foster learning") (p. 1). Furthermore, they explain that they are looking for not just learning but meaningful learning, where learners are able to demonstrate an understanding of the information by being able to apply it beyond the learning environment. (As opposed to learning it long enough to pass a test, and then not really being able to do anything useful with it.)

Mayer and Moreno base their work on three assumptions about the brain and learning:

1. Dual-channel assumption - people process information through separate auditory/verbal and visual channels. A person can process the two channels concurrently with one another, but cannot process two streams simultaneously on the same channel.

2. Limited capacity assumption - while learning for most people can build exponentially, the amount of new information someone can meaningfully process at a given time is limited.

3. Active processing assumption - it is a heavier cognitive load to process something visually (reading subtitles) and audibly (listening) at the same time than to process one channel at a time.

It is based on these assumptions that they came up with the Cognitive theory of multimedia learning shown here.

Mayer and Moreno assert that you can encourage and increase meaningful learning by being mindful of the channels you require your students to use when designing your multimedia materials. They provide five cases where the cognitive load was overloading learner capacity for meaningful learning. For each case, they offer one or more solutions for lightening the cognitive load: focusing on essential processing (the elements required for making the material understandable), weeding out incidental processing (extra, unnecessary features such as added music, animated gifs, etc.), and being mindful of representational holding (connecting materials together when possible, to lessen the time the learner has to spend going back to find answers in your materials).

Comments:
This was actually my favorite article of the class so far. All of the examples are realistic and the solutions are simple and reasonable. I think many of us could brainstorm and come up with the same or similar solutions in many of these cases; however, we didn't have names (effects) to describe why we would make the change. Also, I find that it is easier to spot cognitive overload problem areas in other people's designs than in my own. I think creating a cheat-sheet checklist of these effects for double-checking my designs would be useful.

I will point out, though, that I don't feel these rules are absolutes when it comes to language learning through multimedia. Let me give an example:

In developing language learning materials, I will often see developers use a video, add subtitles, and call it a "listening exercise". If the subtitles are in the target language, I say "No, this is now a reading exercise, because your students aren't being forced to negotiate meaning through listening (a harder skill than reading); they'll just be reading." If the subtitles are in English, I say "No, this is now a cultural exercise, because your students aren't being forced to negotiate understanding through the target language." What I encourage instructors/developers to do (for creating a listening activity) is to not use word-for-word subtitling in either language, but instead to caption (pop up a word or short phrase at the bottom of the screen) only for select words. Examples I give of relevant reasons to caption: 1. salient terms, 2. pointing out a new grammar item, 3. emphasizing a new vocabulary word, 4. help with meaning (in language learning we say that your goal is to present material that is i+1, *just* above the learner's level of understanding, to push them to grow; if you go above that, the learner won't learn, because they become too frustrated and shut down). In this case, we are using multiple modalities to draw attention to a new/important/difficult item in order to help the learner.

Monday, June 10, 2013

W5: Effective Web Instruction, Part III

The last two chapters of Frick and Boling's Effective Web Instruction are Chapter 5: Building a Web Prototype and Chapter 6: Assessing and Maintaining the Site.

Chapter 5 is split into four parts:

  1. Issues Regarding Current Web Technologies
  2. Further Limitations and Some Alternatives
  3. Types of Web Solutions, Depending on What You Need For Instruction
  4. Making Templates for Web Prototypes.
My biggest criticism of this text is the obvious: It is so outdated! According to the footer, this text was last published on May 13, 2006, and 7 years is a long time to not update info on web technologies.

For example, it talks about students needing Netscape (is it even still in existence?), Internet Explorer (I know this is still around, but I don't know anyone who actually uses it), and AOL (did people still use AOL in 2006?). Today's browsers of choice are Mozilla Firefox, Safari, and Google Chrome. It also talks about students using Hotmail accounts (instead of Yahoo or Gmail).

Important current teaching tools it doesn't mention: Dropbox (which belongs where they discuss the limitations of sending documents over email), wikis and blogs for student collaboration, and programs like Skype and Google Chat (for free conferencing) or Adobe Breeze (which costs money).

The main program it discusses for developing websites is Dreamweaver (which is what I use today); however, there are now a lot of programs designed for people with less programming experience to create interactive modules (like Adobe Captivate). These are WYSIWYG-type programs that allow developers to add forms and interactivity in a way that most beginners can't with programs like Dreamweaver.

Also, when discussing Flash in 2013, it's important to mention that Flash isn't available on iPads (or other Apple mobile devices), which would be a big reason to consider not using Flash in development!

And that is the last thing I was going to mention: a chapter written in 2006 doesn't take into account mobile technology, specifically smartphones and tablets. Even the section on web templates talks about the familiar top nav bar and left nav bar popular on many sites, but those forms are actually not the most conducive to designing sites that will be used on mobile devices. This is something to consider if you are developing instruction that will be accessed using mobile technology.

(P.S. Frick mentions Ray Kurzweil; may I highly recommend the documentary about him, Transcendent Man. It's on Netflix streaming.)

QUESTION: Can we use the same person to test the paper prototype and the web prototype?

The last chapter, Chapter 6, focuses on Assessing and Maintaining the Site. Reading this chapter reminds me of an old saying my dad always uses: "Measure twice. Cut once." Sure, most of the time he was literally talking about woodworking. However, I think it applies...

Frick and Boling say hey, if you think you are done testing, test one more time. And testing should include two parts: usability testing (measure 1 - testers running through the prototype) and a summative evaluation (measure 2), before going live (cut).

The main thing I learned in this chapter is that during the bug testing phases, it's best to break the team up into testers and fixers and only do one role at a time. This actually makes a lot of sense, because I've been on plenty of group projects where we were all doing both. And a few times, after all the design was finished, I've just told my team members, "Just work through it and write down your problems, and I'll fix them later." I was doing this to save time, because I didn't want people to get hung up trying to fix something when what we really needed to do was locate all of the problems first. Now I know it's a legitimate approach to bug testing. :)

I really like that the checklist on page 108 mentions the importance of the browser. When designing a web-based project, you have to check it in every browser. I currently check all of mine in Safari, Firefox, Chrome, and Internet Explorer. If there is one it works best with and/or one it is NOT compatible with, I add a note at the beginning so that the user is aware up front. Also, as I mentioned about the Flash issue before, if your instruction uses Flash, you should let the user know that it isn't compatible with some devices.
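That up-front note could even be generated automatically with a quick feature check when the page loads. This is only an illustrative sketch in TypeScript, not anything from Frick and Boling; the `hasFlash` and `compatibilityNote` helpers (and the simplified `NavigatorLike` shape) are my own names, and the check relies on the 2013-era `navigator.plugins` list:

```typescript
// Minimal shape of the parts of `navigator` we inspect (illustrative only).
interface NavigatorLike {
  plugins?: { name: string }[];
}

// Returns true if a Flash plugin appears in the browser's plugin list.
// (iPads and other Apple mobile devices will report no Flash plugin.)
function hasFlash(nav: NavigatorLike): boolean {
  if (!nav.plugins) return false;
  return nav.plugins.some((p) => /shockwave flash/i.test(p.name));
}

// Builds the compatibility note to show the learner before they begin.
function compatibilityNote(nav: NavigatorLike): string {
  return hasFlash(nav)
    ? "All lessons should display correctly."
    : "Note: this instruction uses Flash, which your browser/device does not appear to support.";
}
```

In a real page you would pass the browser's own `navigator` object to `compatibilityNote` and display the resulting string at the top of the instruction.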

The description of bug testing explained here is definitely more detailed and defined than anything I've used before, and I am interested in giving some of these strategies a try when testing this project.

The final section, Conducting the Analysis, really is like a bookend to the beginning of the textbook, showing that in the analysis phase you really need to make sure that the final product aligns with your initial goals: for example, that the needs of the audience and stakeholders are addressed, the learning objectives are met, etc., and that the material is appropriate to be taught using computer technology (although really, isn't it a little late to determine it's not? :) )

Finally, I liked the idea of conducting and analyzing interviews of users (which I've never done), because you can see not only whether the technology worked, but whether the learners really understood the point of the instruction, which I can embarrassingly admit I have not checked in the past.




Saturday, June 8, 2013

W5: What Makes e^3 Instruction?

Merrill's paper on What Makes e^3 (effective, efficient, engaging) Instruction? is especially interesting to me because, much like BYU Hawaii, our department is trying to develop a distance learning program for non-university student language learners*, but we are very concerned about how to make the instruction effective, efficient, and engaging.

*IU students can take language courses for our languages through CEUS, but we are more interested in reaching non-student learners: military personnel, businesspeople, aid workers, government officials, etc. People who don't want to take semester-long academic classes, but want to learn the language NOW for purposeful reasons.

In this article, Merrill explains that BYU Hawaii wanted to "(1) improve the quality of instruction and (2) reach more students by distance learning" (p. 1) and they worked to accomplish this by applying Merrill's first principles of instruction to their curriculum design. Specifically, they focused on utilizing problem-centered learning, incorporating more peer-interactions, and boosting technology-enhanced instruction.

Merrill explains that using problem-centered learning is important for e^3 because it boosts learners past the associative memory phase (where one quickly forgets what they've learned if not given the opportunity to apply it in a timely manner) and into what he calls the mental model. The description of the mental model ("A problem-centered approach facilitates the adaption of an existing mental model or enables the learner to form a new mental model that integrates the various component skills into a meaningful whole." p. 1) reminds me of constructivist learning theory, which hypothesizes that people learn by re-constructing the thoughts, ideas, and understanding in their brains.

He explains that problem-centered instruction is a structured approach that involves guided teaching through demonstration, direct teaching, and giving students opportunities to engage with increasingly difficult problems, while (it's not quite clear, but I'm assuming) slowly decreasing instructor support.

Peer interactivity is important for e^3 because it forces learners to test out their new mental models, not only in application, but also in peer review. This directly correlates to the methods of learning by teaching, where it is hypothesized that students actually absorb and acquire knowledge more deeply when they are asked not only to demonstrate their knowledge, but also to teach it to their peers, cementing their understanding. BYU Hawaii seems to rely on peer collaboration and peer critiques as their means of interactivity. Their justification is that this engages students by first having them apply the skills in their own solution, then work in a group to come up with a consensus solution, and finally critique their peers based on their understanding of the problem and solutions, giving them several differentiated ways to interact with the material.

Using technology-enhanced instruction is key for providing the instruction to distance students, as well as supporting the engagement of the local learners. Merrill gives a description of how the framework they created supports the instruction. However, I'll be honest that I'm still not quite sure how it all looks. I'd really like to see some screen captures and more concrete examples.

Overall, I thought this article was clear, simple, and direct. It gives me a lot to think about as I develop my own projects, because these are precisely the same ideas we are looking to incorporate into our own online learning project development. As someone who learns by seeing and doing, I think this particular article would be awesome if turned into a learning module that actually incorporates the strategies being described!


Tuesday, June 4, 2013

Mastery Assessment

To be honest, I’ve had some difficulty coming up with my Mastery Assessments as explained by Mager. My biggest confusion is how we are supposed to achieve Mager’s definition of assessments in SDEL environments. If this is a course where there is no instructor, then who is going to be measuring and evaluating the responses??

Obj 1: Without references, be able to name (write) the 5 Cs of Foreign Language Standards.
Obj 2: Given 10 examples of textbook tasks/activities/exercises, be able to correctly identify (label), with 100% accuracy, which of the 5 Cs of Foreign Language Standards is being exemplified.
Test Item: Read the following excerpts from a language textbook. For each excerpt, label the Foreign Language Standard that is being addressed. (If you believe there is more than one standard being addressed in an excerpt, label it with the one that is most obvious.)

Obj 3: Without references, be able to identify (label):
      a. a task.
      b. an activity.
      c. an exercise.
Test item: Read the following excerpts from a language textbook. For each excerpt, label whether it is a task, activity, or exercise.

Obj 4: When provided with the ACTFL Proficiency Guideline*, be able to correctly identify (circle) three examples of Advanced level tasks/activities/exercises.
Test item: Read the following excerpts from a language textbook. Using the provided ACTFL Proficiency Guide, put a check next to the three excerpts that would be rated as Advanced Level Proficiency according to the scale.

Obj 5: When provided with the ILR Scale, be able to correctly identify (circle) three examples of Level 3 tasks/activities/exercises.
Test item: Read the following excerpts from a language textbook. Using the provided ILR Scale, put a check next to the three excerpts that would be rated as Level 3 Proficiency according to the scale.

Obj 6: Without references, be able to explain (write) the differences between the purpose of the ACTFL Proficiency Guidelines and the ILR Scale.
Test item: Read the given scenarios, and write whether you would use the ACTFL Proficiency Guidelines or the ILR Scale. For each, write an explanation for your choice.

Obj 7: Without references, be able to define (write) the Communicative Language Teaching Approach and list (write) the five features.
Test item: Imagine you have received the following email from a publisher asking for more grammar lessons in the textbook draft you submitted. Reply to the email by stating that your textbook uses the Communicative Language Teaching Approach, and provide a quick yet thorough explanation of CLT, including its five features.

Obj 8: Provided 5 themes and 5 topics, be able to discriminate (label) between a topic and a theme.
Test item: The themes and topics have been mixed up. Read the 10 items given and drag and drop them into either the theme or topic column.

Obj 9: Given a potential textbook theme, be able to name (write) ten example topics related to the theme.
Test item: You will be given a theme, and you will have 15 minutes to come up with 10 related topics. Write the related topics in the box below. Please provide short explanations of your topic as needed to explain relation to theme.

Obj 10: When given an example topic, be able to provide an example (write) of applying concentric design.
Test item: You will be given a topic, and you will have 15 minutes to describe, using examples, how you could extend the topic across three different proficiency levels using a concentric design approach.

Obj 11: Given a sample Scope and Sequence, be able to identify (label) the parts.
Test item: Label the parts on the following sample Scope and Sequence from an Advanced language textbook.

Obj 12: Using all references provided in the training, create (write) a skeleton Scope and Sequence for an Advanced LCTL textbook.
Test item: Now, using what you have learned throughout the entire course, create a Scope and Sequence for an Advanced textbook for the less commonly taught language of your choice. For the purposes of evaluation, please write the Scope and Sequence in English.
(^^Creating a good Scope and Sequence can take a long time…how can I put a time limitation on this???)

Sunday, June 2, 2013

W4: Making a Paper Prototype

Yes, this is a picture of my actual craft room.
I told you I am an avid crafter.
Okay, I feel right at home reading Carolyn Snyder's Chapter 4: Making a Paper Prototype (from Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces). I am an avid crafter, and one of my favorite crafts is scrapbooking, so I already have pretty much everything she recommends, from the card stock to the 40 different colors of Sharpies to the restickable glues and tapes and, yes, even the transparencies. (Hint: for the items she lists as being difficult to find in an office supply store, head to your local craft store...Michaels, Hobby Lobby, Joann's, etc...and check out the scrapbooking section.)

Summary:

The chapter is split into a few major sections:

  1. Paper Prototyping Materials - all of the office/scrapbooking supplies that you are likely to need (and those that Snyder doesn't recommend) to create an effective paper prototype.
  2. Creating a Background - Suggestions for how to represent the overall background/template of your interface.
  3. How to Prototype Interface Widgets - How to create the paper versions of things like buttons, check boxes, dialogue boxes, text fields, drop-down lists, etc. She even suggests ways to represent cursors.
  4. Representing the Users' Choices
  5. Hand Drawn versus Screen Shots - Snyder suggests using good design but representing it with hand-drawn, simple images/logos instead of complicated pictures/images, EXCEPT when representing specific information. The example she gives is showing merchandise for a clothing website; in that case she recommends cutting out pictures of the items from a catalog.
  6. Simulating Interaction - Snyder gives suggestions for how to indicate interaction, such as tips and rollover messages, important sounds, drag and drop, animation, scrolling, etc.
  7. Beyond the Computer Screen: Incorporating Other Elements - Snyder gives some explanations and examples of times when it might be necessary to represent non-software components of the instruction, such as hardware props (tape backup system, MP3 player, etc.) and hardware devices (instrument panels, medical equipment, handheld devices, etc.). Here is where she also addresses the role that real live people can play in the prototyping, such as a technique Snyder learned from Jared Spool called "Incredibly Intelligent Help." IIH is a technique where the facilitator acts as a "Help" command, recording the types of questions the testers have; these questions can become the basis for the real Help section of the final product. Other roles include human actors (who represent call-line operators, online customer service reps, etc.) and, rarely, "Wizard of Oz" testing to represent very complicated interactions. All of this prototyping will also contribute to deciding what elements need to go into the Documentation, Help, and Training.
Comments:
Overall, I thought this chapter was very interesting to think of all of the ways we can represent our design and interaction using a few simple office supplies. As of right now, I'm still not sure exactly how the design for my project is going to play out, so I feel like I mostly read the suggestions quickly and plan on revisiting them as I really begin developing the prototype.

I do feel like many of her suggestions are way beyond the scope of a simple web-based training. It's obvious that her book addresses not just web-based design but also software, databases, etc.

I can really see the benefit of creating a prototype for more than just usability testing with users. I think of times when I have been working with a developer who is trying to understand something that I want a module or app to be able to do, and it's difficult to make clear exactly what I want. In the case of a very complicated design/software, I can see it being very helpful for a designer to be able to show the interaction to the programmers in this prototype, so that they can see exactly the kind of interaction that the designer expects.

Saturday, June 1, 2013

W4: Effective Web Instruction, Part II

To continue reviewing Ted Frick and Elizabeth Boling's Effective Web Instruction: Handbook for Inquiry-Based Process, this summary will cover the second two parts: Preparing for Testing a Prototype and Testing a Prototype.

When testing a prototype, a designer should address 3 main questions:

  1. Is the instruction effective?
  2. Are the students satisfied with the instruction?
  3. Is the product usable by students?
In order to address the above, a designer should start by developing a paper prototype. This is supposed to save time on the development end and also to encourage testers to be more honest with their feedback: the idea is that the more like a rough draft a prototype looks, the more willing testers will be to be critical if something doesn't seem right to them.
(Personally, I feel like the directions for developing the paper prototype sound pretty darn time consuming. At first I thought, based on Myer's description of it, that it would be a fast mock-up, kind of like the wireframing we did for Infographics in 541, but reading what Frick and Boling want for a paper prototype, it is incredibly detailed! With "links" that actually link to tabbed pages, etc. I haven't ever done one before, but I feel like I'm going to be spending about as much time making a "simple" word document prototype as I would an HTML version...just without pictures.)

Before administering the prototype to an authentic tester, one should administer a pre-mastery assessment, to make sure that the students don't already have all the skills needed to "pass" the mastery assessment.
(I'm concerned about the time aspect of this... my mastery assessment will include creating a mini-lesson based on an authentic text. So how much time do I have them work on creating this mini-lesson? For the assessment, it will probably be about 60 min. These are volunteers, so I'm always nervous about how much time I'm expecting them to donate to me.)

Before having the tester test the prototype, the observer should give the tester short and simple explanations of how the prototype works (for example, how to "click" the links). And they should ask the tester to *think aloud* as they work and to point out any problems they have. If during the actual process the tester isn't thinking aloud enough and/or is offering only vague, general thoughts, the observer should prompt the tester. However, the observer should not answer questions or give help...just record what they observe as the tester works through the prototype.
(Would it be okay to start with a quick example prototype and demonstrate just what kind of thinking aloud we expect? I think that would be helpful to show them what kind of reasoning and thoughts we are really looking for.)

After the tester finishes the prototype, the observer should administer the mastery assessment. If the tester cannot successfully complete the mastery assessment, the designers must figure out why not and look for problems. However, Frick and Boling note that even if testers do complete the mastery assessment correctly, that does not mean there are no problems.

At this point, the observer should also administer a formative evaluation survey (using the Likert scale) asking the tester to rate how they felt about the prototype.

Finally, the designers (and observers) should gather and review the data from the observation/prototype testing. They should be specifically interested in looking for patterns of problems. Based on this data and the results of the formative survey, they can decide what changes to make to the instruction/design.
(According to Frick and Boling, at this point we should make changes and re-test, and keep testing until the sample is saturated (no more patterns can be found); however, for the sake of time in this class, I do not see that as a logistical possibility.)

Sunday, May 26, 2013

W3: Changes in Student Motivation During Online Learning

In Changes In Student Motivation During Online Learning, Theodore Frick and Kyong-Jee Kim explore the factors that commonly affect student motivation in self-directed e-learning (SDEL) environments.

Review of the Literature
The article begins with a review of the literature on online learning, which has thus far concentrated on evaluating motivation in online courses. Frick and Kim use this review to create a context in which to situate their own study of motivation in SDEL.

The main differences between online learning and SDEL are:

  1. Online learning is typically more similar to traditional classroom paradigms, with an instructor and peer collaboration, just in an online context (like the classes in IU's IST program), whereas SDEL offers limited to no student-to-student and/or teacher-to-student interaction.
  2. Online learning typically has more rigid parameters about pacing (because it usually follows a semester or other third-party timetable), whereas SDEL courses are usually self-paced and have little to no time constraints for completion.
Frick and Kim divide the factors affecting motivation in the literature into three main categories:
  1. Internal - Internal factors are those that relate to how a learner feels about the learning. For example, do they feel in control? Do they feel it's relevant? Is the design clear and professional, or is it too busy and confusing to navigate? The theory is basically that how the student feels about the instruction/training directly affects their motivation. Many of these internal factors can be linked back to Keller's ARCS model of motivation (attention, relevance, confidence, and satisfaction); however, there are other factors as well, such as whether the design is clean, professional, and easy to navigate; whether the tasks are within the learner's zone of proximal development (that which they are capable of accomplishing with limited support); whether the instruction has the right balance of academic learning time (ALT) (a ratio describing how much time is spent on activities, as defined by their complexity and ease of solution); and others.
  2. External - External factors are the environmental factors that affect motivation. The two main ones listed being technical support and organizational support. Students reported higher satisfaction with a course if they felt they got the proper training and received positive support when they had difficulties. The literature also briefly mentions that feeling overwhelmed between a school/work/home balance is also an external factor.
  3. Personal - Personal factors are all of the personal learner variables that affect one's motivation. For example, the learner's temperament, age, gender, and prior knowledge and experience when they begin the class. There is conflicting opinion in the literature on whether or not learning styles substantially affect learner motivation in online learning situations.
Summary of Study of Motivational Factors in SDEL
Using the knowledge they gained from reviewing the literature, Frick and Kim began their own study of factors affecting motivation in SDEL learning environments.

The research questions (p. 7):
  • Which factors best predict learner motivation in the beginning, during, and end of self-directed e-learning (SDEL)?
  • Does learner motivation change as he or she goes through instruction in SDEL?
  • What factors are related to learner motivational change during SDEL?
Method
The context
Frick and Kim emailed 800 learners at a major US e-learning company which provides SDEL courses for personal professional development, corporations, and universities. The course formats are "stand-alone, typically 6-8 hours long, self-paced instruction delivered via the web" (p. 8) that focus on information technology skills and "soft skills development (e.g. coaching skills, consulting skills)" (p. 8). These courses typically have no instructor, but learners have the option of paying an extra fee to add instructional support to their course.

Participants
Frick and Kim sought out 400 undergrad and graduate students and 400 working professionals of various backgrounds. 368 responded, with an almost equal distribution of students and employees and an almost equal distribution of gender. The largest age group was the 25-34 range (42%), with an almost equal distribution of 24 & younger, 35-44, and 45 & up. A good majority of respondents reported using the internet more than 20 hours a week and using 3-5 software programs on a regular basis (so they are pretty familiar with technology).

The Research Instrument
Frick and Kim gathered quantitative data using a self-reporting questionnaire consisting of 59 multiple choice (Likert scale) questions and one open ended question about their general feelings on SDEL.
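The article doesn't describe how the 59 Likert items were scored, but the standard procedure for questionnaires like this is to reverse-score any negatively worded items and then average. Here is a hedged sketch of that convention in TypeScript; the function names and the 5-point scale are my assumptions, not details from Frick and Kim:

```typescript
// Reverse-score a response on a Likert scale (on 5 points: 1 <-> 5, 2 <-> 4, 3 stays 3).
function reverseScore(response: number, scaleMax = 5): number {
  return scaleMax + 1 - response;
}

// Average one participant's item responses, reverse-scoring the items
// whose (zero-based) indices appear in `reversedItems`.
function likertMean(responses: number[], reversedItems: Set<number>): number {
  const scored = responses.map((r, i) =>
    reversedItems.has(i) ? reverseScore(r) : r
  );
  return scored.reduce((sum, r) => sum + r, 0) / scored.length;
}
```

For example, responses [5, 4, 2] with the third item reverse-scored average to (5 + 4 + 4) / 3, about 4.33.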

Data Collection and Analysis
The questionnaire was sent out to participants via listservs and email and the researchers received a 46% response rate. All responses were kept anonymous.

Results
Researchers found that the factors that best predicted learner motivation were:
  1. Perceived relevance: How relevant the learner perceives the material to be affects their starting motivation, which in turn affects their motivation during the course, and finally the overall positive change in their motivation throughout the course.
  2. Reported technology competence: The number of software programs used on a regular basis and the time spent on the internet each week directly affect learner motivation.
The other factors did not appear to be statistically significant for predicting learner motivation during and at the end of the instruction.


Additional Comments:
I feel like I personally relate more to the online learning scenario as a student. And since starting my grad certificate in the IST program, I have definitely experienced some of the factors found to negatively affect motivation in online learning (and I concur in their effect). For example, on pg. 5 it is discussed that a poorly designed website and breakdowns in technology can lead to learner frustration; I circled both of those, because I've had instances where the Oncourse links were so convoluted, I had trouble keeping track of which assignments were due when, and the different links weren't consistent with one another. That is really frustrating! Also, in one class I took, nearly once a week one of my classmates or I had to point out to the instructor that there was a broken link to our resources. This often led to a delay in retrieving the needed resource, adding frustration.

I also double circled the point about the challenges adult learners face trying to strike a balance between work, home, and course demands. I was glad to see that this is something designers are (theoretically) taking into consideration for us non-traditional students.

But perhaps my biggest circle (underline and asterix!) was on p 7 while distinguishing between the online learning and SDEL: "In SDEL, it may not be easy to find student peers for interaction - whether positive or for commiseration." I know that I am a talker. I like to talk about my problems (some might say overtalk them). And I can think of at least one class where being able to commiserate with my peers about our frustrations about the class, the instructor, and the disorganization of it all is what kept me going in that class. I have never taken an SDEL course, but I can imagine this would be a major factor for me if I did, which it seems is also an issue with SDEL learners as exemplified by their responses in the I don't want to learn by myself items (p. 11).