Friday, June 14, 2013

W6: Nine Ways to Reduce Cognitive Load in Multimedia Learning

Summary

In their article Nine Ways to Reduce Cognitive Load in Multimedia Learning, Richard Mayer and Roxana Moreno explore how the way we present information, and how much we present at a time, can affect learners' ability to absorb and process new information.

Mayer and Moreno begin by defining multimedia learning ("learning from words and pictures") and multimedia instruction ("presenting words and pictures that are intended to foster learning") (p. 1). Furthermore, they explain that they are looking for not just learning but meaningful learning, where learners are able to demonstrate an understanding of the information by applying it beyond the learning environment. (As opposed to learning it just long enough to pass a test, and then not really being able to do anything useful with it.)

Mayer and Moreno make three assumptions about the brain and learning:

1. Dual-channel assumption - people process information through separate auditory/verbal and visual/pictorial channels. A person can process material on the two channels concurrently, but cannot process two streams on the same channel simultaneously.

2. Limited capacity assumption - however quickly people learn, the amount of new information someone can meaningfully process at a given time in each channel is limited.

3. Active processing assumption - meaningful learning requires substantial active processing (selecting, organizing, and integrating the material), so it is a heavier cognitive load to process something visually (reading subtitles) and audibly (listening) at the same time than to process one channel at a time.

Based on these assumptions, they developed the cognitive theory of multimedia learning diagrammed in the article.

Mayer and Moreno assert that you can encourage and increase meaningful learning by being mindful of the channels you require your students to use when designing your multimedia materials. They present five scenarios in which cognitive load was overloading learners' capacity for meaningful learning. For each case, they offer one or more solutions for lightening the cognitive load: supporting essential processing (the processing required to make the material understandable), weeding out incidental processing (extra, unnecessary features such as background music, animated GIFs, etc.), and reducing representational holding (keeping connected materials together when possible, to lessen the time the learner has to spend holding information in mind while searching back through your materials).

Comments:
This was actually my favorite article of the class so far. All of the examples are realistic, and the solutions are simple and reasonable. I think many of us could brainstorm the same or similar solutions in many of these cases; however, we didn't have names (the effects) to describe why we would make the change. Also, I find it is easier to spot the cognitive-overload problem areas in other people's designs than in my own. I think creating a cheat-sheet checklist of these effects to double-check my designs would be useful.

I will point out, though, that I don't feel these rules are absolutes when it comes to language learning through multimedia. Let me give an example:

In developing language learning materials, I often see developers use a video, add subtitles, and call it a "listening exercise". If the subtitles are in the target language, I say "No, this is now a reading exercise, because your students aren't being forced to negotiate meaning through listening (a harder skill than reading); they'll be reading." If the subtitles are in English, I say "No, this is now a cultural exercise, because your students aren't being forced to negotiate understanding through the target language." What I encourage instructors/developers to do (when creating a listening activity) is not to use word-for-word subtitling in either language, but instead to caption (pop up a word or short phrase at the bottom of the screen) only for some words. Examples I give of relevant reasons to caption: 1. salient terms, 2. pointing out a new grammar item, 3. emphasizing a new vocabulary word, 4. help with meaning (in language learning we say that your goal is to present material that is i+1, *just* above the learner's level of understanding, to push them to grow; go above that, and the learner won't learn because they become too frustrated and shut down). In this case, we are using the multiple modalities to draw attention to a new/important/difficult item in order to help the learner.
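For what it's worth, this kind of selective captioning is easy to do with HTML5 video. Here is a minimal sketch (the video ID, timings, and caption text are all made up for illustration) that cues only a few salient words instead of a full transcript:

```typescript
// Selective captioning sketch: cue only salient terms, new grammar, or new
// vocabulary rather than subtitling every word. IDs and timings are hypothetical.
const video = document.querySelector<HTMLVideoElement>("#listening-video")!;

// Create an empty caption track on the video and turn it on.
const track = video.addTextTrack("captions", "Selective captions", "en");
track.mode = "showing";

// Pop up a short phrase only where the learner needs i+1 support.
track.addCue(new VTTCue(12.0, 15.0, "bozor = market (new vocabulary)"));
track.addCue(new VTTCue(41.5, 44.0, "-gan edi (past perfect - new grammar)"));
```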

Monday, June 10, 2013

W5: Effective Web Instruction, Part III

The last two chapters of Frick and Boling's Effective Web Instruction are Chapter 5: Building a Web Prototype and Chapter 6: Assessing and Maintaining the Site.

Chapter 5 is split into four parts:

  1. Issues Regarding Current Web Technologies
  2. Further Limitations and Some Alternatives
  3. Types of Web Solutions, Depending on What You Need For Instruction
  4. Making Templates for Web Prototypes.
My biggest criticism of this text is the obvious: it is so outdated! According to the footer, this text was last published on May 13, 2006, and seven years is a long time to go without updating information on web technologies.

For example, it talks about students needing Netscape (is it even still in existence?), Internet Explorer (I know this is still around, but I don't know anyone who actually uses it), and AOL (did people still use AOL in 2006?). Today's browsers of choice are Mozilla Firefox, Safari, and Google Chrome. It also talks about students using Hotmail accounts (instead of Yahoo or Gmail).

Important current teaching tools it doesn't mention: Dropbox (which belongs where they discuss the limitations of sending documents by email), wikis and blogs for student collaboration, and programs like Skype and Google Chat (for free conferencing) or Adobe Breeze (which costs money).

The main website-development program it talks about is Dreamweaver (which is what I use today); however, there are now also a lot of programs designed to let people with less programming experience create templates and interactive modules (like Adobe Captivate). These are WYSIWYG-type programs that allow developers to add forms and interactivity in a way that most beginners can't with programs like Dreamweaver.

Also, when discussing Flash in 2013, it's important to mention that Flash isn't available on iPads (or other Apple mobile devices), which is a big reason to consider not using Flash in development!
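If you do inherit Flash-based content, one practical stopgap is to detect Flash at load time and swap in an HTML5 fallback. A rough sketch (the element ID and file name are hypothetical), for illustration only:

```typescript
// Detect Flash support and fall back to HTML5 video on devices (like iPads)
// that don't have it. The element ID and file name are made up for this sketch.
function hasFlash(): boolean {
  // navigator.plugins is empty or absent on iOS and most modern browsers.
  return typeof navigator.plugins !== "undefined" &&
    navigator.plugins.namedItem("Shockwave Flash") !== null;
}

const container = document.getElementById("lesson-media")!;
if (!hasFlash()) {
  container.innerHTML = ""; // clear the <object>/<embed> Flash markup
  const video = document.createElement("video");
  video.src = "lesson1.mp4"; // an MP4 version of the same lesson
  video.controls = true;
  container.appendChild(video);
}
```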

And that is the last thing I was going to mention: a chapter written in 2006 doesn't take into account mobile technology, specifically smartphones and tablets. Even the section on web templates talks about the familiar top nav bar and left nav bar popular on many sites, but those forms are actually not the most conducive to designing sites that will be used on mobile devices. This is something to consider if you are developing instruction that will be accessed using mobile technologies.
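For example, one lightweight way to adapt those fixed nav bars for small screens is a media-query check that collapses the nav on narrow viewports. A minimal sketch (the class names are made up; a real site would pair this with CSS):

```typescript
// Collapse a desktop-style top/left nav into a compact mobile menu when the
// viewport is narrow. The .site-nav / nav-collapsed names are hypothetical.
const narrowScreen = window.matchMedia("(max-width: 600px)");

function layoutNav(mq: MediaQueryList | MediaQueryListEvent): void {
  const nav = document.querySelector<HTMLElement>(".site-nav")!;
  nav.classList.toggle("nav-collapsed", mq.matches);
}

layoutNav(narrowScreen);                            // apply once on load
narrowScreen.addEventListener("change", layoutNav); // re-apply on resize/rotate
```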

(P.S. Frick mentions Ray Kurzweil; may I highly recommend the documentary about him, Transcendent Man? It's on Netflix streaming.)

QUESTION: Can we use the same person to test the paper prototype and the web prototype?

The last chapter, Chapter 6, focuses on assessing and maintaining the site. Reading this chapter reminds me of an old saying my dad always uses: "Measure twice. Cut once." Sure, most of the time he was literally talking about woodworking. However, I think it applies...

Frick and Boling say, hey, if you think you are done testing, test one more time. Testing should include two parts: usability testing (measure 1 - testers running through the prototype) and a summative evaluation (measure 2), before going live (cut).

The main thing I learned in this chapter is that during the bug-testing phases, it's best to break the team up into testers and fixers and do only one activity at a time. This actually makes a lot of sense, because I've been on plenty of group projects where we were all doing both. And a few times, after all the design was finished, I've just told my team members, "Work through it and write down your problems, and I'll fix them later." I did this to save time, because I didn't want people to get hung up trying to fix something when what we really needed was to locate all of the problems first. Now I know it's a legitimate approach to bug testing. :)

I really like that the checklist on page 108 mentions the importance of the browser. When designing a web-based project, you have to check it in all browsers. I currently check all of mine in Safari, Firefox, Chrome, and Internet Explorer. If there is one browser it works best with and/or one it is NOT compatible with, I add that note at the beginning so the user is aware from the start. Also, as I mentioned regarding the Flash issue above, if your instruction uses Flash, you should let the user know that it isn't compatible with some devices.

The bug-testing process described here is definitely more detailed and defined than anything I've used before, and I am interested in trying some of these strategies when testing this project.

The final section, Conducting the Analysis, really is like a bookend to the beginning of the textbook, showing that in this analysis phase you need to make sure the final product aligns with your initial goals - for example, that the needs of the audience and stakeholders are met, that the learning objectives are achieved, and that the content is appropriate to be taught using computer technology (although really, isn't it a little late to determine that it's not? :) )

Finally, I liked the idea of conducting and analyzing interviews of users (which I've never done), because you can see not only whether the technology worked, but also whether the learners really understood the point of the instruction - which I can embarrassingly admit I have not checked in the past.




Saturday, June 8, 2013

W5: What Makes e^3 Instruction?

Merrill's paper on What Makes e^3 (effective, efficient, engaging) Instruction? is especially interesting to me because, much like BYU Hawaii, our department is trying to develop a distance learning program for non-university language learners*, and we are very concerned about how to make the instruction effective, efficient, and engaging.

*IU students can take language courses for our languages through CEUS, but we are more interested in reaching non-student learners: military personnel, businessmen, aid workers, government officials, etc. People who don't want to take semester-long academic classes, but want to learn the language NOW for purposeful reasons.

In this article, Merrill explains that BYU Hawaii wanted to "(1) improve the quality of instruction and (2) reach more students by distance learning" (p. 1) and they worked to accomplish this by applying Merrill's first principles of instruction to their curriculum design. Specifically, they focused on utilizing problem-centered learning, incorporating more peer-interactions, and boosting technology-enhanced instruction.

Merrill explains that problem-centered learning is important for e^3 because it boosts learners past the associative memory phase (where one quickly forgets what one has learned if not given the opportunity to apply it in a timely manner) and into what he calls a mental model. The description of the mental model ("A problem-centered approach facilitates the adaption of an existing mental model or enables the learner to form a new mental model that integrates the various component skills into a meaningful whole," p. 1) reminds me of constructivist learning theory, which hypothesizes that people learn through re-constructing the thoughts, ideas, and understanding in their brains.

He explains that problem-centered instruction is a structured approach involving guided teaching through demonstration, direct teaching, and giving students opportunities to engage with progressively more difficult problems, while (it's not quite clear, but I'm assuming) slowly decreasing instructor support.

Peer interactivity is important for e^3 because it forces learners to test out their new mental models, not only in application but also in peer review. This directly correlates to learning-by-teaching methods, where it is hypothesized that students absorb and acquire knowledge more deeply when they are asked not only to demonstrate their knowledge but also to teach it to their peers, cementing their understanding. BYU Hawaii seems to rely on peer collaboration and peer critiques as their means of interactivity. Their justification is that it engages students by having them first apply the skills in their own solution, then work in a group to come up with a consensus solution, and finally critique their peers based on their understanding of the problem and solutions, giving them several differentiated ways to interact with the material.

Using technology-enhanced instruction is key for providing the instruction to distance students, as well as for supporting the engagement of the local learners. Merrill gives a description of how the framework they created supports the instruction. However, I'll be honest: I'm still not quite sure how it all looks. I'd really like to see some screen captures and more concrete examples.

Overall, I thought this article was clear, simple, and direct. It gives me a lot to think about as I develop my own projects, because these are precisely the ideas we are looking to incorporate into our own online learning project development. As someone who learns by seeing and doing, I think this particular article would be awesome if turned into a learning module that actually incorporates the strategies being described!


Tuesday, June 4, 2013

Mastery Assessment

To be honest, I've had some difficulty coming up with my Mastery Assessments as explained by Mager. My biggest confusion is how we are supposed to achieve Mager's definition of assessment in SDEL environments. If this is a course with no instructor, then who is going to be measuring and evaluating the responses?

Obj 1: Without references, be able to name (write) the 5 Cs of Foreign Language Standards.
Obj 2: Given 10 examples of textbook tasks/activities/exercises, be able to correctly identify (label) with 100% accuracy which of the 5 Cs of Foreign Language Standards is being exemplified.
Test Item: Read the following excerpts from a language textbook. For each excerpt, label the Foreign Language Standard that is being addressed. (If you believe there is more than one standard being addressed in an excerpt, label it with the one that is most obvious.)

Obj 3: Without references, be able to identify (label):
      a. a task.
      b. an activity.
      c. an exercise.
Test item: Read the following excerpts from a language textbook. For each excerpt, label whether it is a task, activity, or exercise.

Obj 4: When provided with the ACTFL Proficiency Guidelines*, be able to correctly identify (circle) three examples of Advanced level tasks/activities/exercises.
Test item: Read the following excerpts from a language textbook. Using the provided ACTFL Proficiency Guidelines, put a check next to the three excerpts that would be rated as Advanced Level Proficiency according to the scale.

Obj 5: When provided with the ILR Scale, be able to correctly identify (circle) three examples of Level 3 tasks/activities/exercises.
Test item: Read the following excerpts from a language textbook. Using the provided ILR Scale, put a check next to the three excerpts that would be rated as Level 3 Proficiency according to the scale.

Obj 6: Without references, be able to explain (write) the differences between the purpose of the ACTFL Proficiency Guidelines and the ILR Scale.
Test item: Read the given scenarios and write whether you would use the ACTFL Proficiency Guidelines or the ILR Scale. For each, write an explanation for your choice.

Obj 7: Without references, be able to define (write) the Communicative Language Teaching Approach and list (write) its five features.
Test item: Imagine you have received the following email from a publisher asking for more grammar lessons in the textbook draft you submitted. Reply to the email by stating that your textbook uses the Communicative Language Teaching Approach, and provide a quick yet thorough explanation of CLT, including its five features.

Obj 8: Provided 5 themes and 5 topics, be able to discriminate (label) between a topic and a theme.
Test item: The themes and topics have been mixed up. Read the 10 items given and drag and drop them into either the theme or topic column. (A sketch of how this drag-and-drop item might be implemented appears after this list.)

Obj 9: Given a potential textbook theme, be able to name (write) ten example topics related to the theme.
Test item: You will be given a theme, and you will have 15 minutes to come up with 10 related topics. Write the related topics in the box below. Please provide short explanations of your topics as needed to show their relation to the theme.

Obj 10: When given an example topic, be able to provide an example (write) of applying concentric design.
Test item: You will be given a topic, and you will have 15 minutes to describe, using examples, how you could extend the topic across three different proficiency levels using a concentric design approach.

Obj 11: Given a sample Scope and Sequence, be able to identify (label) the parts.
Test item: Label the parts on the following sample Scope and Sequence from an Advanced language textbook.

Obj 12: Using all references provided in the training, create (write) a skeleton Scope and Sequence for an Advanced LCTL textbook.
Test item: Now, using what you have learned throughout the entire course, create a Scope and Sequence for an Advanced textbook for the less commonly taught language of your choice. For the purposes of evaluation, please write the Scope and Sequence in English.
(^^Creating a good Scope and Sequence can take a long time…how can I put a time limitation on this???)
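Since the Obj 8 test item above calls for drag and drop, here is a rough sketch of how that theme/topic sorting could work in a web-based module, using the standard HTML5 drag-and-drop events (all element IDs and class names are made up for illustration):

```typescript
// Drag-and-drop sketch for the Obj 8 item: learners drag mixed theme/topic
// cards into a "theme" or "topic" column. IDs and class names are hypothetical.
const cards = document.querySelectorAll<HTMLElement>(".card");
cards.forEach((card) => {
  card.draggable = true;
  card.addEventListener("dragstart", (e) => {
    e.dataTransfer!.setData("text/plain", card.id); // remember which card moved
  });
});

["theme-column", "topic-column"].forEach((columnId) => {
  const column = document.getElementById(columnId)!;
  column.addEventListener("dragover", (e) => e.preventDefault()); // allow drops
  column.addEventListener("drop", (e) => {
    e.preventDefault();
    const cardId = e.dataTransfer!.getData("text/plain");
    column.appendChild(document.getElementById(cardId)!); // place the card
  });
});
```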

Sunday, June 2, 2013

W4: Making a Paper Prototype

Yes, this is a picture of my actual craft room.
I told you I am an avid crafter.
Okay, I feel right at home reading Carolyn Snyder's Chapter 4: Making a Paper Prototype (from Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces). I am an avid crafter, and one of my favorite crafts is scrapbooking, so I already have pretty much everything she recommends, from the card stock to the 40 different colors of Sharpies to the restickable glues and tapes and, yes, even the transparencies. (Hint: for the items she lists as being difficult to find in the office supply store, head to your local craft store...Michael's, Hobby Lobby, Joann's, etc...and check out the scrapbooking section.)

Summary:

The chapter is split into a few major sections:

  1. Paper Prototyping Materials - all of the office/scrapbooking supplies that you are likely to need (and those that Snyder doesn't recommend) to create an effective paper prototype.
  2. Creating a Background - Suggestions for how to represent the overall background/template of your interface.
  3. How to Prototype Interface Widgets - How to create paper versions of things like buttons, check boxes, dialog boxes, text fields, drop-down lists, etc. She even suggests ways to represent cursors.
  4. Representing the Users' Choices
  5. Hand Drawn versus Screen Shots - Snyder suggests using good design, but representing it with hand-drawn, simple images/logos rather than complicated pictures/images, EXCEPT when representing specific information. The example she gives is showing merchandise for a clothing website; in that case she recommends cutting out pictures of the items from a catalog.
  6. Simulating Interaction - Snyder gives suggestions for how to indicate interaction, such as tool tips and rollover messages, important sounds, drag and drop, animation, scrolling, etc.
  7. Beyond the Computer Screen: Incorporating Other Elements - Snyder gives some explanations and examples of times when it might be necessary to represent non-software components of the instruction, such as hardware props (tape backup system, MP3 player, etc.) and hardware devices (instrument panels, medical equipment, handheld devices, etc.). Here is where she also addresses the role that real live people can play in the prototyping, such as a technique Snyder learned from Jared Spool called "Incredibly Intelligent Help." IIH is a technique where the facilitator acts as a "Help" command, recording the types of questions the testers have; these questions can then be the basis for creating the real Help section of the final product. Other roles include human actors (who represent call-line operators, online customer service reps, etc.) and, rarely, "Wizard of Oz" testing to represent very complicated interactions. All of this prototyping will also contribute to deciding what elements need to go into the documentation, help, and training.
Comments:
Overall, I found this chapter very interesting for all of the ways it shows we can represent our design and interaction using a few simple office supplies. As of right now, I'm still not sure exactly how the design for my project is going to play out, so I mostly read the suggestions quickly and plan on revisiting them as I really begin developing the prototype.

I do feel like many of her suggestions are way beyond the scope of a simple web-based training. It's obvious that her book addresses not only web-based design but also software, databases, etc.

I can really see the benefit of creating a prototype for more than just testing usability. I think of times when I have been working with a developer who is trying to explain to me something they want a module or app to be able to do, and it's difficult for them to make clear exactly what they want. In the case of a very complicated design/software, I can see it being very helpful for a designer to be able to show the interaction to the programmers in a prototype like this, so that they can see exactly the kind of interaction the designer expects.

Saturday, June 1, 2013

W4: Effective Web Instruction, Part II

To continue reviewing Ted Frick and Elizabeth Boling's Effective Web Instruction: Handbook for Inquiry-Based Process, this summary will cover the next two parts: Preparing for Testing a Prototype and Testing a Prototype.

When testing a prototype, a designer should address three main questions:

  1. Is the instruction effective?
  2. Are the students satisfied with the instruction?
  3. Is the product usable by students?
In order to address the above, a designer should start by developing a paper prototype. This is supposed to save time on the development end and also encourage testers to be more honest with their feedback: the idea is that the more draft-like a prototype looks, the more willing testers will be to speak up when something doesn't seem right to them.
(Personally, I feel like the directions for developing the paper prototype sound pretty darn time consuming. At first, based on Dr. Myers's description, I thought it would be a fast mock-up, kind of like the wireframing we did for Infographics in 541, but reading what Frick and Boling want, a paper prototype is incredibly detailed! With "links" that actually link to tabbed pages, etc. I haven't ever done one before, but I feel like I'm going to spend about as much time making a "simple" Word document prototype as I would an HTML version...just without pictures.)

Before administering the prototype to an authentic tester, one should administer a pre-mastery assessment to make sure the students don't already have all the skills needed to "pass" the mastery assessment.
(I'm concerned about the time aspect of this... my mastery assessment will include creating a mini-lesson based on an authentic text. So how much time do I give them to work on creating this mini-lesson? For the assessment, it will probably be about 60 minutes. These are volunteers, so I'm always nervous about how much time I'm expecting them to donate to me.)

Before having the tester test the prototype, the observer should give the tester short and simple explanations of how the prototype works (for example, how to "click" the links). They should ask the tester to *think aloud* as they work and to point out any problems they have. If during the actual process the tester isn't thinking aloud enough, or is giving only vague, general thoughts, the observer should prompt the tester. However, the observer should not answer questions or give help...just record what they observe as the tester works through the prototype.
(Would it be okay to start with a quick example prototype and demonstrate just what kind of thinking aloud we expect? I think that would be helpful to show them what kind of reasoning and thoughts we are really looking for.)

After the tester finishes the prototype, the observer should administer the mastery assessment. If the tester cannot successfully complete the mastery assessment, the designers must figure out why and look for problems. However, Frick and Boling note that even if the tester does complete the mastery assessment correctly, that does not mean there are no problems.

At this point, the observer should also administer a formative evaluation survey (using a Likert scale) asking the tester to rate how they felt about the prototype.

Finally, the designers (and observers) should gather and review the data from the observation/prototype testing, looking specifically for patterns of problems. Based on this data and the results of the formative survey, they can decide what changes to make to the instruction/design.
(According to Frick and Boling, at this point we should make changes and re-test, and keep testing until the sample is saturated - no more new patterns can be found; however, for the sake of time in this class, I do not see that as a logistical possibility.)
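As a side note for my own project: once those Likert survey responses come back, even a tiny script makes the patterns easier to see. A minimal sketch (the survey items and ratings below are invented sample data):

```typescript
// Summarize Likert ratings (1-5) per survey item to spot weak areas in the
// prototype. The item names and ratings are made-up sample data.
const responses: Record<string, number[]> = {
  "Navigation was easy to follow": [4, 5, 3, 4],
  "Instructions were clear": [2, 3, 2, 3],
};

for (const [item, ratings] of Object.entries(responses)) {
  const mean = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  console.log(`${item}: mean ${mean.toFixed(1)} (n=${ratings.length})`);
}
```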

Sunday, May 26, 2013

W3: Changes in Student Motivation During Online Learning

In Changes In Student Motivation During Online Learning, Theodore Frick and Kyong-Jee Kim explore the factors that commonly affect student motivation in self-directed e-learning (SDEL) environments.

Review of the Literature
The article begins with a review of the literature on online learning, which has thus far concentrated on evaluating motivation in online courses. Frick and Kim use this review to create a context in which to begin their own study of motivation in SDEL.

The main differences between online learning and SDEL are:

  1. Online learning is typically more similar to traditional classroom paradigms, with an instructor and peer collaboration, just in an online context (like the classes in IU's IST program), whereas SDEL offers limited to no student-to-student and/or teacher-to-student interaction.
  2. Online learning typically has more rigid parameters about pacing (because it usually follows a semester or other third-party timetable), whereas SDEL courses are usually self-paced with little to no time constraint for completion.
Frick and Kim divide the factors affecting motivation in the literature into three main categories:
  1. Internal - Internal factors relate to how a learner feels about the learning. For example: do they feel in control? Do they feel it's relevant? Is the design clear and professional, or too busy and confusing to navigate? The theory is basically that how the student feels about the instruction/training directly affects their motivation. Many of these internal factors can be linked back to Keller's ARCS model of motivation (attention, relevance, confidence, and satisfaction); others include whether the tasks are within the learner's zone of proximal development (that which they are capable of accomplishing with limited support), whether the instruction has the right balance of academic learning time (ALT, a ratio describing how long is spent on activities as defined by their complexity and ease of solution), and so on.
  2. External - External factors are the environmental factors that affect motivation, the two main ones listed being technical support and organizational support. Students reported higher satisfaction with a course if they felt they got proper training and received positive support when they had difficulties. The literature also briefly mentions that feeling overwhelmed by the school/work/home balance is an external factor as well.
  3. Personal - Personal factors are all of the individual learner variables that affect one's motivation: for example, the learner's temperament, age, gender, and prior knowledge and experience when they begin the class. There is conflicting opinion in the literature on whether learning styles substantially affect learner motivation in online learning situations.
Summary of Study of Motivational Factors in SDEL
Using the knowledge they gained from reviewing the literature, Frick and Kim began their own study of factors affecting motivation in SDEL learning environments.

The research questions (p. 7):
  • Which factors best predict learner motivation in the beginning, during, and end of self-directed e-learning (SDEL)?
  • Does learner motivation change as he or she goes through instruction in SDEL?
  • What factors are related to learner motivational change during SDEL?
Method
The context
Frick and Kim emailed 800 learners at a major US e-learning company that provides SDEL courses for personal professional development, corporations, and universities. The courses are "stand-alone, typically 6-8 hours long, self-paced instruction delivered via the web" (p. 8) and focus on information technology skills and "soft skills development (e.g. coaching skills, consulting skills)" (p. 8). These courses typically have no instructor, but learners have the option of paying an extra fee to add instructional support to their course.

Participants
Frick and Kim sought out 400 undergraduate and graduate students and 400 working professionals of various backgrounds. 368 responded, with an almost equal distribution of students and employees and an almost equal distribution of gender. The largest age group was the 25-34 range (42%), with an almost equal distribution of 24 & younger, 35-44, and 45 & up. A good majority of respondents reported using the internet more than 20 hours a week and using 3-5 software programs on a regular basis (so they are pretty familiar with technology).

The Research Instrument
Frick and Kim gathered quantitative data using a self-report questionnaire consisting of 59 Likert-scale questions and one open-ended question about the respondents' general feelings on SDEL.

Data Collection and Analysis
The questionnaire was sent out to participants via listservs and email, and the researchers received a 46% response rate (the 368 respondents of the 800 contacted). All responses were kept anonymous.

Results
Researchers found that the best factors for predicting learner motivation were:
  1. Perceived relevance: How relevant the learner perceives the course to be affects their starting motivation, which in turn affects their motivation during the course and, finally, the overall positive change in learner motivation throughout the course.
  2. Reported technology competence: The number of software programs used on a regular basis and the time spent on the internet each week directly affect learner motivation.
The other factors did not appear to be statistically significant predictors of learner motivation during and at the end of the instruction.


Additional Comments:
I feel like I personally relate more to the online learning scenario as a student. And since starting my grad certificate in the IST program, I have definitely experienced some of the factors found to negatively affect motivation in online learning (and I concur in their effect). For example, on p. 5 it is discussed that a poorly designed website and breakdowns in technology can lead to learner frustration. I circled both of those, because I've had instances where the Oncourse links were so convoluted that I had trouble keeping track of which assignments were due when, and the different links weren't consistent with one another. That is really frustrating! Also, in one class I took, nearly once a week one of my classmates or I had to point out to the instructor that there was a broken link to our resources. This often delayed retrieving the needed resource, adding frustration.

I also double-circled the point about the challenges adult learners face trying to strike a balance between work, home, and course demands. I was glad to see that this is something designers are (theoretically) taking into consideration for us non-traditional students.

But perhaps my biggest circle (underline and asterisk!) was on p. 7, where they distinguish between online learning and SDEL: "In SDEL, it may not be easy to find student peers for interaction - whether positive or for commiseration." I know that I am a talker. I like to talk about my problems (some might say overtalk them). And I can think of at least one class where being able to commiserate with my peers about our frustrations with the class, the instructor, and the disorganization of it all is what kept me going. I have never taken an SDEL course, but I can imagine this would be a major factor for me if I did, and it seems to be an issue for SDEL learners too, as exemplified by their responses to the "I don't want to learn by myself" items (p. 11).


W3: Effective Web Instruction

Ted Frick and Elizabeth Boling simplify the process of designing computer mediated learning in Effective Web Instruction: Handbook for Inquiry-Based Process.

This easy to read handbook concisely describes the essential steps for creating effective computer based training. It is split up into 6 main parts: Getting Started, Conducting the Analysis, Preparing for Testing a Prototype, Testing a Prototype, Building a Web Prototype, and Assessing and Maintaining the site.

This summary will cover the first two parts: Getting Started and Conducting the Analysis.

Getting Started
In Getting Started, Frick and Boling explain that to design inquiry-based web instruction, one must follow an iterative process of making "empirical observations to answer questions and make decisions" (p. 4), giving you a chance to re-evaluate and adjust the training as necessary throughout the design process.

Frick and Boling stress that a designer need not be an expert in all stages of the design process, as long as they hire good people for the design team whose strengths complement their teammates' weaknesses. They list the most important members of a web-based instruction design team as: analyst; instructional designer; information design and graphic designer; web technology specialist; evaluation/usability testing specialist; and web server & system administrator.

Conducting the analysis

Develop your Instructional Goals
Frick and Boling assert that the first thing to do when developing instructional goals is to determine who the "stakeholders" are. The stakeholders are the people most directly (and sometimes indirectly) affected by the training and its outcome - the people most invested in the training and most affected by its success (or failure). In the case of my project, the stakeholders are:
  • me (as the Language Instruction Specialist of the department, it's my job to teach pedagogy principles and lead professional development),
  • my boss (because his reputation is affected by the work CeLCAR publishes) 
  • CeLCAR, the department I work for (part of our grant stipulates providing professional development opportunities for LCTL teachers and language developers, and every four years we have to reapply for funding)
  • US Department of Education/Title VI (because they provide the majority of our funding and evaluate our materials)
  • Indiana University/College of Arts and Sciences (because they provide partial funding and it is their name on the final project)
  • Language Developers (they will be using the training to help them develop language textbooks)
  • LCTL Teachers (because they will be using the finished product to teach their classes)
  • LCTL Students (because they will be using the textbooks to learn the languages)
For this project, I plan on focusing on the stakeholders I have immediate access to: my boss, CeLCAR, the language developers, LCTL Teachers, and LCTL students.

Form the Instructional goals and identify their indicators
Once you have determined the stakeholders, you can think about what you want the learners to be able to do at the end of the training. Frick and Boling go one step further and encourage a designer, while working on instructional goals, to think about how to assess whether or not a learner has acquired the desired ability/skill/knowledge. I especially enjoyed this discussion, because these concepts are mostly new to me (they are very different from what we discussed in my testing courses, which focused more on how to write good traditional tests). The examples given of the differences between "more or less efficient and more or less authentic" (p. 11) were very useful for illustrating the different levels of efficiency and authenticity available.

Obviously, I wrote my instructional goals before thinking about mastery, and I am still unsure how I am going to assess them. This means that as I start thinking about mastery assessment (due next week), I expect I might have to revise my instructional objectives as well. (It is an iterative process, after all!)

Learner Analysis
Conducting a thorough learner analysis is important for determining motivation, predicting possible difficulties, planning the learning, etc. In the case of my project, I am very familiar with my learner audience. I feel my biggest obstacles will be that my learners:
  • are second-language speakers,
  • work full time (they could view this training as another drag on their time), and
  • often do not have much formal language education experience, as is true of many LCTL developers/teachers (so I must be careful not to teach beyond their zone of proximal development).
I will have to continue working on a thorough learner analysis in order to get a more complete view of my learners (their comfort with technology, etc.).

Context Analysis
Frick and Boling go to lengths to describe that web-based instruction should never be designed as a "just because you can" choice. Instead, a designer should have sufficient justification for choosing a web-based context over standard pen-and-paper and/or classroom delivery.

In the case of my project, my justification is: 
  • time (instructors/developers can complete the training on their own and at their own speed. No meetings to attend that interfere with their other many duties)
  • money (we can't afford to host a multi-developer in-person training)
  • distance (many of our potential developers are located overseas and are unable to attend in person)
I'm still not sure what resources we will need besides a computer and internet access. I will work on this more as the project develops.

Wednesday, May 22, 2013

Individual project

Project Title: Fundamentals of Designing and Developing Advanced Level Textbooks for Less Commonly Taught Languages (LCTLs)

Target Audience: Language teachers and/or language teaching materials developers. Men and women, ages mid-20s to late 50s, with advanced degrees in language education and/or applied linguistics. International (some living in the US and some living abroad). Primarily non-native English speakers.

Description of Problem: CeLCAR is a Title VI, federally funded Language Resource Center that creates learning materials for teaching the languages and cultures of the Central Asian region. In the past, language developers have worked independently and without much guidance when developing language textbooks. Recently, new developers have used previously developed textbooks as a guide for developing new textbooks, as well as consulting with an on-staff language pedagogist. However, this one-on-one/face-to-face training is not as organized and efficient as it could be. It is also especially ineffective for working with developers living overseas.

By creating a CBT course to teach the fundamentals of designing and developing advanced level textbooks for LCTL developers, the center will save time during the development process. Additionally, by reminding the developers of the foundations of language education, the content and effectiveness of the materials will be strengthened. And finally, a CBT will allow non-local developers to access and benefit from the training as well.

Instructional Objectives:
1. Without references, be able to name (write) and define (write) the 5 Cs* of Foreign Language Standards.
2. Given 10 examples of textbook tasks/activities/exercises, be able to correctly identify (label) with 100% accuracy which of the 5 Cs of Foreign Language Standards is being exemplified (if any).
3. Without references, be able to identify (circle):
       a. a task.
       b. an activity.
       c. an exercise.
4. When provided with the ACTFL** Proficiency Guidelines, be able to correctly identify (circle) three examples of Advanced level tasks/activities/exercises.
5. When provided with the ILR*** Scale, be able to correctly identify (circle) three examples of Level 3 tasks/activities/exercises.
6. Without references, be able to explain (write) the differences between the purpose of the ACTFL Proficiency Guidelines and the ILR Scale.
7. Without references, be able to define (write) the Communicative Language Teaching Approach and list (write) its five features.
8. Provided 5 themes and 5 topics, be able to discriminate (label) between a topic and a theme.
9. Given a potential textbook theme, be able to name (write) ten example topics related to the theme.
10. When given an example topic, be able to provide an example (write) of applying concentric design.
11. Given a sample Scope and Sequence, be able to identify (label) the parts.
12. Using all references provided in the training, create (write) a skeleton Scope and Sequence for an Advanced LCTL textbook.

* Communication, Culture, Comparisons, Communities, and Connections
** American Council on Teaching Foreign Languages
*** Interagency Language Roundtable

W2: Mager's Instructional Objectives

The first time I read Dr. Mager's Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction was last summer, in my R521 class. I had experience writing learning objectives through my education and my experience as an English teacher; however, I had never written such technical and specific instructional objectives.

Summary

The book covers how to write instructional objectives, but it also addresses why clearly stated objectives are so important and provides opportunities for learners to identify components and practice writing their own.

According to Mager, all instructional objectives should contain an audience (the who), a behavior (what the learner is expected to do), a condition (a context, a situation, etc), and a degree (quality of performance the learner must achieve, time limits for completion, etc.).

My favorite part of the book is probably the section on Gibberish (pp. 142-143). Being in the field of education, I'm used to people relying on gibberish and holding it in a weird esteem. I'm much more fond of the short, simple, say-what-you-really-mean approach that Mager endorses.

What I find most difficult about writing Mager-style instructional objectives is coming up with degrees that meet his criteria. Because most of my trainings are professional development for teachers and/or related to computer-assisted language learning, I find it very difficult to come up with meaningful, accurate, and effective measurement criteria for the objectives.

The aspect of Mager-style objectives I find most superfluous is the audience. I don't really see the point of saying "The student will be able to" over and over when, by the nature of trainings and written objectives, it is directly implied that it is the student who will be able to demonstrate the ability. It's not as if, without stating this directly, one could get confused and think it was someone other than the student. It would seem to me that one should be able to define the audience before stating the objectives, and the audience for all objectives should then be assumed to be the students defined there.

W2: Merrill's Five-Star Rating Scale

(Sorry for the delayed postings this week. I'm recovering from a mean stomach bug. Yuck!)

Summary of 5 Star Instructional Design Rating by M. David Merrill

Merrill's 5-Star Instructional Design Rating is an evaluation tool for rating courseware according to the First Principles of Instructional Design, using a three-level star rating (bronze, silver, and gold).


For each of the first principles identified - task-centered approach, activation, demonstration, application, and integration (Merrill et al., 2008) - Merrill provides three questions to be used for evaluating how well the principle is addressed within the instruction. It is not clearly stated, but one can deduce that if all three questions are "yes", the principle receives a gold star; if two of the three are "yes", it receives a silver star; and if only one is "yes", it receives a bronze. Presumably, if all questions are "no", no star is awarded.
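To make my deduction concrete, here is that scoring rule written out as a tiny function (this mapping is my inference, not something Merrill states):

```typescript
// My reading of the per-principle scoring rule: count the "yes" answers to a
// principle's three questions and map the count to a star level.
type StarRating = "gold" | "silver" | "bronze" | "none";

function ratePrinciple(yesCount: 0 | 1 | 2 | 3): StarRating {
  switch (yesCount) {
    case 3: return "gold";
    case 2: return "silver";
    case 1: return "bronze";
    default: return "none"; // presumably no star when all answers are "no"
  }
}

console.log(ratePrinciple(2)); // "silver"
```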


My biggest issue with this rating system is that it's too subjective. I would think it is a fine tool for Merrill himself to use, because he knows exactly what his own criteria are for the questions. However, I don't think that if you tested it for reliability between a set of different raters, you would find consistent ratings across raters.


For example: 3.b.(2) "Multiple representations are used for the demonstrations?" Are two representations sufficient? Or three? Or four? What constitutes multiple? Or 3.c. "Is media relevant to the content and used to enhance learning?" What one evaluator considers relevant might not seem equally relevant to a rater with higher standards, more experience, or different expectations.


So while it seems like a good start, I really feel that if this is going to be a tool used universally for evaluating instruction, more work needs to go into developing descriptive parameters to improve reliability between raters. (I haven't checked my peers' evaluations yet, but if two of us evaluated the same instruction, it would be interesting to compare our ratings in order to examine the 5-star rating's reliability.)


Speaking of reviewing: most computer-mediated learning that I'm familiar with falls into the receptive ("spray and pray") or exploratory ("sink-or-swim") categories, so I decided to evaluate one of the examples that Dr. Myers provided us. I chose Understanding Creditor Statements by Wendy Baez (2006).



Is the instructional architecture tutorial or experiential? Yes - the course is a tutorial for understanding credit card terms, statements, and finance charges, and I would say it goes beyond receptive or exploratory.
Is the courseware TELL-&-ASK (T&A) instruction? Here is where I'm on the fence. Yes, this courseware provides multiple-choice/short-answer questions as part of the individual modules (with immediate feedback), which seems to fit the description of Tell-&-Ask. But it also has a final assessment, which would seem to make it not T&A. Then again, the final assessment seems to be just a multiple-choice test, so how is that different from T&A? The learner doesn't have to produce anything to demonstrate understanding.

I'm going to assume, since this was provided as a sample to review, that Dr. Myers does not consider this T&A (which would otherwise make it inappropriate for the 5-star rating). However, I think I will need more discussion to understand why these questions are not considered just Tell-&-Ask.

1. Is the courseware presented in the context of real world problems?
  a. Does the courseware show learners the task they will be able to do or the problem they will be able to solve as a result of completing a module or course? No. The modules clearly state the learning objectives and sometimes use examples as part of the lesson. However, they do not begin the instruction with a demonstration/authentic example of what the learner will be able to do as a result of the training.
  b. Are students engaged at the problem or task level, not just the operation or action levels? Yes. I find the practice engaging. It could be improved by asking learners to practice on their own bank statements (especially for Lesson 2: Locating Statement Information), but there are logistical reasons why that probably wouldn't work at this time.
  c. Does the courseware involve a progression of problems rather than a single problem? Yes. Each module has questions and/or practice sections, and there is a final assessment at the very end of the course.



2. Does the courseware attempt to activate relevant prior knowledge or experience?
  a. Does the courseware direct learners to recall, relate, describe, or apply knowledge from relevant past experience that can be used as a foundation for new knowledge? Yes. The very first thing the course does in Lesson 1 is show a quote and ask, "Do you recognize the previous statement?" And throughout the course, there are references to the learner's experience and own personal statements.
  b. Does the courseware provide relevant experience that can be used as a foundation for the new knowledge? Yes. I feel like the use of authentic statements and calculators helps build a firm foundation for applying the new knowledge.
  c. If learners already know some of the content, are they given an opportunity to demonstrate their knowledge? No. I do not see where the learner can demonstrate prior knowledge. The course seems to focus on the new information being taught.

3. Does the courseware demonstrate (show examples of) what is to be learned rather than merely tell information about what is to be learned?
  a. Are the demonstrations (examples) consistent with the content being taught - (1) examples and non-examples for concepts, (2) demonstrations for procedures, (3) visualizations for processes, (4) modeling for behavior? Yes. The examples used (interactive statements, calculators, case studies, etc.) are good examples and useful for the learner to understand the concepts and visualizations.
  b. Are at least some of the following learner guidance techniques employed - (1) learners are directed to relevant information, (2) multiple representations are used for the demonstrations, (3) multiple demonstrations are explicitly compared? Yes and no. There are multiple representations (examples) used; however, I don't feel they are explicitly compared. And there really isn't any redirecting to relevant information (unless you count linking to the glossary page as redirecting to relevant information).
  c. Is media relevant to the content and used to enhance learning? This really depends on one's definition of media. The course does not use music or video, but it does use pictures, and it even has a picture with rollover features for a more interactive experience. So I guess I will give it a Yes, mostly due to the interactive statement.

4. Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?
  a. Are the application (practice) and the posttest consistent with the stated or implied objectives - (1) information-about practice requires learners to recall or recognize information, (2) parts-of practice requires learners to locate, name, and/or describe each part, (3) kinds-of practice requires learners to identify new examples of each kind, (4) how-to practice requires learners to do the procedure, (5) what-happens practice requires learners to predict a consequence of a process given conditions, or to find faulted conditions given an unexpected consequence? Yes. Modules 2 and 3 especially have good practice sections that use different kinds of practice to support the stated objectives and require the learner to demonstrate understanding in order to answer the questions correctly.
  b. Does the courseware require learners to use new knowledge or skill to solve a varied sequence of problems, and do learners receive corrective feedback on their performance? Yes. Learners have to solve a variety of problems and are provided immediate feedback.
  c. In most application or practice activities, are learners able to access context-sensitive help or guidance when having difficulty with the instructional materials, and is this coaching gradually diminished as the instruction progresses? Yes and no. There is no "Help" section or place for the learner to go directly for help. However, new terms are hyperlinked throughout the course, linking back to definitions and/or explanations.

5. Does the courseware provide techniques that encourage learners to integrate (transfer) the new knowledge or skill into their everyday life?
  a. Does the courseware provide an opportunity for learners to publicly demonstrate their new knowledge or skill? No. There is an assessment the learner can take at the end, through a third-party website (but the link is not active). However, I wouldn't consider this publicly demonstrating knowledge.
  b. Does the courseware provide an opportunity for learners to reflect on, discuss, and defend their new knowledge or skill? No. The questions and practice do not seem to lend themselves to discussion, reflection, or collaboration.
  c. Does the courseware provide an opportunity for learners to create, invent, or explore new and personal ways to use their new knowledge or skill? No. No star.
Overall rating:

REFERENCE


Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive principles for instructional design. In AECT Handbook (pp. 173-184).

Saturday, May 11, 2013

Week 1: Prescriptive Principles for Instructional Design

Summary of Prescriptive Principles for Instructional Design by M. David Merrill, Matthew Barclay, and Andrew van Schaak

First Principles

Merrill, Barclay, and van Schaak introduce the First Principles of Instruction, which they arrived at by examining many instructional design theories and models and looking for the underlying prescriptive principles common to almost all of them.

These First Principles are TADAI:

  • Task-centered problem (learning through doing)
  • Activation (accessing prior knowledge)
  • Demonstration (demonstrations and examples)
  • Application (practice WITH feedback)
  • Integration (reflection, discussion, etc...constructivist learning)
Merrill et al. determined that all ISD models follow these same principles; however, not all models use all of the principles. They theorize that there is a direct correlation between the effectiveness of an instruction and the number of principles it employs. A study done by Merrill and Thompson with NETg supported this theory: the group trained using the First Principles performed notably higher than the group that received only demonstration, and significantly higher than the control group that received no instruction. In addition to the difference in performance (both in mastery of skills and in time taken to complete the assessment), student and instructor feedback also supported the hypothesis, because the First Principles group reported the highest feelings of satisfaction with the instruction.

Other Instructional Design Principles

Merrill et al. then introduce several ID models (Clark & Mayer's Principles for Multimedia and E-Learning, Van der Meij's Minimalist Principles, Foshay et al.'s Cognitive Training Model, Seidel et al.'s Instruction Principles Based on Learning Principles, and van Merrienboer's 4C/ID model) and show via comparison tables how they perceive each model's principles to align with the identified First Principles.

Designing Task-Centered Instruction

Finally, Merrill et al. suggest an approach for designing instruction that incorporates all of the First Principles: Merrill's Pebble-in-the-Pond approach. They then provide a table that illustrates the approach, with general explanations of each step and of how the steps interact with one another.


My response

Overall, I found that I accepted Merrill et al.'s theory of First Principles; however, the broadness/generality of the principles sometimes seemed too vague to me. For example, one of the First Principles is the Application Principle; part of its explanation is that learners will "receive intrinsic or corrective feedback" (p. 175), but it doesn't specify the timeliness of that feedback. For the most part, I have taken good feedback to mean timely feedback, and many would say that timely feedback equals instant feedback (especially when it comes to online learning). However, Allen's e-learning principles state, "Delay judgement: if learners have to wait for confirmation, they will typically reevaluate for themselves while the tension mounts - essentially reviewing and rehearsing" (p. 179). Merrill et al. identify this as applying the Application Principle, so I'm interested to see how they would explain that the same principle also applies to van der Meij's "Provide on-the-spot error information" (p. 179). They list both as examples of the Application Principle. Do they not care how/when feedback is given, only that it is given? This seems odd to me.

Secondly, I had a hard time distinguishing in my mind the difference between the Task-centered Approach and the Application Principle, because I typically think of the practice you do while completing a task-based activity (learning while doing) as the application (practice). I'm not sure if they are saying that the task-centered goal is the general approach to the design (major objective: change the oil in a car), and that to achieve it you must provide application practice with feedback for each step (the individual learning objectives needed to successfully change the oil). Or are they saying that the task-centered portion covers only the actual active learning (content), and the application only the practice portion that follows? I may need a little more discussion to understand the difference between these two.

This is the first time I've encountered Merrill's Pebble-in-the-Pond approach, and I'm not really sure I understand it. I will try to summarize it using a language learning example:
  1. Whole task/Identify a whole task - Have a conversation about age and birthdays in X language (beginner level).
  2. Progression/Identify the series of subtasks needed to achieve the whole task - how to ask and give your age; how to ask and give a date.
  3. Components/Identify the components needed to achieve the subtasks - How to ask and give your age: appropriate grammar and syntax (verb choice and conjugation: "I am # years" vs. "I have # years"; modification for case of noun and number adjective; etc.) and cultural issues (is it impolite to ask an age? do people generally lie and/or not know their age?). How to ask and give a date: month vocabulary, how to express years, order of the date (mm/dd/yy), and cultural issues (not knowing one's birthdate).
  4. Strategy/Specify an instructional strategy - modeling (show a video), scaffolding (teacher/student practice), cooperative learning (student/student practice), etc.
  5. Interface/Specify the user interface - ?? Online vs. classroom? Small group vs. pair work? Identify multimedia. Or is this approach only for e-learning??
  6. Production/Produce the course - self explanatory.
Comments on how I applied this appreciated!
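While writing that out, it helped me to see the approach as a nested outline: whole task → subtasks → components, with the strategy and interface layered on top. Here is my example again as a toy Python sketch (again, my own illustration, not anything from Merrill's chapter):

# My language-learning example as a nested structure:
# whole task -> subtasks -> components, plus strategy/interface notes.
pebble_in_the_pond = {
    "whole_task": "Have a conversation about age and birthdays (beginner)",
    "subtasks": {
        "ask and give your age": [
            "verb choice and conjugation ('I am # years' vs. 'I have # years')",
            "case of noun and number adjective",
            "cultural issues (is it impolite to ask an age?)",
        ],
        "ask and give a date": [
            "month vocabulary",
            "how to express years",
            "order of the date (mm/dd/yy)",
            "cultural issues (not knowing one's birthdate)",
        ],
    },
    "strategy": ["modeling (video)", "scaffolding (teacher/student practice)",
                 "cooperative learning (student/student practice)"],
    "interface": "?? online vs. classroom, pair vs. small group ??",
}

# Walk the progression in order: the whole task first, then each
# subtask with the components needed to achieve it.
print(pebble_in_the_pond["whole_task"])
for subtask, components in pebble_in_the_pond["subtasks"].items():
    print("- " + subtask + ": " + "; ".join(components))

Seeing it this way makes steps 1-3 feel like building the data (the task analysis) and steps 4-6 like deciding how to present it.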

A few other comments as I read: 
  • While overall I like Clark & Mayer's Principles for Multimedia Learning, I was surprised that they advocate that "students learn better from animation and narration than from animation and narration and text (p. 147)" (p. 178). While this may be true, I would be wary of designing instruction that didn't provide text as well as narration, because of accessibility for hearing-impaired learners. I am very familiar with adding alt descriptions to visual items in a program to be read by screen readers for the visually impaired; however, I don't know what the alternative is for hearing-impaired people if you do not provide text somewhere.
  • I was also interested in why van der Meij says to "Prevent mistakes whenever possible" (p. 179) as part of the Minimalist Principles. In language learning, we don't discourage mistakes, because they are most often learning opportunities. Oftentimes, making a mistake and then having to correct oneself for comprehension makes a bigger impact on language acquisition than direct teaching. (Obviously, you have to be wary of repeated mistakes without self-correction, however.)
A few random criticisms:
  • Did anyone else notice they have two each of Tables 14.1, 14.2, 14.3, and 14.4? I get that they were in different sections of the chapter, but they should have either carried the numbering forward (14.6 and beyond) or done something to differentiate between the chapter sections, like 14-1.1, 14-2.1, etc. JMO. :)
  • Why didn't they follow Clark & Mayer's principle that "students learn better when corresponding words and pictures (tables!) are presented near rather than far from each other on the page or screen" (p. 177)? Small criticism, but I hate having to flip back and forth between pages when the paragraph isn't on the same page as the table to which it refers!

Thursday, May 9, 2013

Hello World!


Hi everyone! I'm Amber. I actually live in Bloomington, IN, but I am working on my IST Certificate via distance learning because I am a working mother of 2 (twin boys turning 6 in June). It's much easier to go "back to school" in the comfort of my home office while my kiddos are sleeping in the next room than to attend classes in the flesh.

I currently work full time at IU as an academic specialist developing language materials and curricula. My specialty is language learning technology.

I decided that I wanted to pursue a certificate in IST for two reasons: #1 - to improve my productivity and effectiveness in my current position. And #2 - to broaden my qualifications from the world of language learning technology to instructional technology and design more generally, so that I will be more competitive in the job market.

The main reason I'm interested in IST is because I am a "let's not mess around...do it right the first time and then let's all go home" type person. When I'm at work, that is time away from my family. I want all time not being spent with my family to be effective and efficient, because I want to feel like I'm away from them for a purpose. I cannot stand disorganization and inefficiency in the workplace. It's a waste of the company's money, but I also feel like it's a waste of my time.

I like IST/ISD because it is a systematic attack that finds what isn't working, proposes solutions, and applies them! Then, perhaps my favorite part: EVALUATES whether or not the changes are actually making improvements! This part is so important and so often neglected (at least in academia)!

I feel like I have a good foundation of IST applications through my working experience, but I'm looking for the theoretical education and training, and of course, accreditation via a certificate.
 
An interesting thing about me is that I'm taking this class with my dad! He graduated with an MS in IST from IU in May 2010, and he is considering applying for the EdD, so this class is his way of testing the waters.