Sunday, May 26, 2013

W3: Changes in Student Motivation During Online Learning

In Changes in Student Motivation During Online Learning, Theodore Frick and Kyong-Jee Kim explore the factors that commonly affect student motivation in self-directed e-learning (SDEL) environments.

Review of the Literature
The article begins with a review of the online-learning literature, which has thus far concentrated on evaluating motivation in online courses. Frick and Kim use this review to establish the context for their own study of motivation in SDEL.

The main differences between online learning and SDEL are:

  1. Online learning is typically more similar to traditional classroom paradigms, with an instructor and peer collaboration, just in an online context (like the classes in IU's IST program), whereas SDEL offers limited to no student-to-student and/or teacher-to-student interaction.
  2. Online learning typically has more rigid parameters about pacing (because it usually follows a semester or other third-party timetable), whereas SDEL courses are usually self-paced and have few or no time constraints for completion.
Frick and Kim divide the factors affecting motivation in the literature into three main categories:
  1. Internal - Internal factors are those that relate to how a learner feels about the learning. For example, do they feel in control? Do they feel it's relevant? Is the design clear and professional, or is it too busy and confusing to navigate? The theory is basically that how the student feels about the instruction/training directly affects their motivation. Many of these internal factors can be linked back to Keller's ARCS model of motivation (attention, relevance, confidence, and satisfaction); however, there are other factors as well, such as whether the design is clean, professional, and easy to navigate; whether the tasks are within the learner's zone of proximal development (that which they are capable of accomplishing with limited support); whether the instruction has the right balance of academic learning time (ALT) (a ratio describing how much time is spent on activities as defined by their complexity and ease of solution); and others.
  2. External - External factors are the environmental factors that affect motivation. The two main ones listed are technical support and organizational support. Students reported higher satisfaction with a course if they felt they got the proper training and received positive support when they had difficulties. The literature also briefly mentions feeling overwhelmed by the school/work/home balance as an external factor.
  3. Personal - Personal factors are all of the personal learner variables that affect one's motivation, for example, the learner's temperament, age, gender, and the prior knowledge and experience they bring to the class. There is conflicting opinion in the literature on whether or not learning styles substantially affect learner motivation in online learning situations.
Summary of Study of Motivational Factors in SDEL
Using the knowledge they gained from reviewing the literature, Frick and Kim began their own study of factors affecting motivation in SDEL environments.

The research questions (p. 7):
  • Which factors best predict learner motivation in the beginning, during, and end of self-directed e-learning (SDEL)?
  • Does learner motivation change as he or she goes through instruction in SDEL?
  • What factors are related to learner motivational change during SDEL?
Method
The context
Frick and Kim emailed 800 learners at a major US e-learning company which provides SDEL courses for personal professional development, corporations, and universities. The course formats are "stand-alone, typically 6-8 hours long, self-paced instruction delivered via the web" (p. 8) that focus on information technology skills and "soft skills development (e.g. coaching skills, consulting skills)" (p. 8). These courses typically have no instructor, but learners do have the option of paying an extra fee if they want instructional support added to their course.

Participants
Frick and Kim sought out 400 undergraduate and graduate students and 400 working professionals of various backgrounds. 368 responded, with an almost equal distribution of students and employees and an almost equal distribution of gender. The largest age group was the 25-34 range (42%), with an almost equal distribution of 24 & younger, 35-44, and 45 & up. A good majority of respondents reported using the internet more than 20 hours a week and using 3-5 software programs on a regular basis (so they are pretty familiar with technology).

The Research Instrument
Frick and Kim gathered quantitative data using a self-reporting questionnaire consisting of 59 multiple-choice (Likert-scale) questions and one open-ended question about participants' general feelings on SDEL.

Data Collection and Analysis
The questionnaire was sent out to participants via listservs and email, and the researchers received a 46% response rate. All responses were kept anonymous.

Results
Researchers found that the best factors in predicting learner motivation were:
  1. Perceived relevance: how the learner perceives relevance affects their starting motivation, which in turn affects their motivation during the course and, finally, the overall positive change in learner motivation throughout the course.
  2. Reported technology competence: the number of software programs used on a regular basis and the time spent on the internet each week directly affect learner motivation.
It seemed that the other factors were not statistically significant for predicting learner motivation throughout and at the end of the instruction.


Additional Comments:
I feel like I personally relate more to the online learning scenarios as a student. And since starting my grad certificate in the IST program, I have definitely experienced some of the factors found to negatively affect motivation in online learning (and I can concur with their effect). For example, on p. 5 it is discussed that a poorly designed website and breaks in technology can lead to learner frustration; I circled both of those, because I've had instances where the Oncourse links were so convoluted I had trouble keeping track of which assignments were due when, and the different links weren't consistent with one another. That is really frustrating! Also, in one class I took, nearly once a week one of my classmates or I had to point out to the instructor that a link to our resources was broken. This often delayed retrieving the needed resource, adding to the frustration.

I also double-circled the point about the challenges adult learners face trying to strike a balance between work, home, and course demands. I was glad to see that this is something that designers are (theoretically) taking into consideration for us non-traditional students.

But perhaps my biggest circle (underline and asterisk!) was on p. 7, where they distinguish between online learning and SDEL: "In SDEL, it may not be easy to find student peers for interaction - whether positive or for commiseration." I know that I am a talker. I like to talk about my problems (some might say overtalk them). And I can think of at least one class where being able to commiserate with my peers about our frustrations with the class, the instructor, and the disorganization of it all is what kept me going. I have never taken an SDEL course, but I can imagine this would be a major factor for me if I did, which it seems is also an issue for SDEL learners, as exemplified by their responses to the "I don't want to learn by myself" items (p. 11).


W3: Effective Web Instruction

Ted Frick and Elizabeth Boling simplify the process of designing computer-mediated learning in Effective Web Instruction: Handbook for Inquiry-Based Process.

This easy-to-read handbook concisely describes the essential steps for creating effective computer-based training. It is split up into six main parts: Getting Started, Conducting the Analysis, Preparing for Testing a Prototype, Testing a Prototype, Building a Web Prototype, and Assessing and Maintaining the Site.

This summary will cover the first two parts: Getting Started and Conducting the Analysis.

Getting Started
In Getting Started, Frick and Boling explain that to design inquiry-based web instruction, one must follow an iterative process of making "empirical observations to answer questions and make decisions" (p. 4), which gives the designer a chance to re-evaluate and adjust the training as necessary throughout the design process.

Frick and Boling stress that a designer need not be an expert in all stages of the design process, as long as they hire good people for the design team whose strengths complement their teammates' weaknesses. They list the most important members of a web-based instruction design team as: analyst; instructional designer; information, design, and graphic designer; web technology specialist; evaluation/usability testing specialist; and web server & system administrator.

Conducting the analysis

Develop your Instructional Goals
Frick and Boling assert that the first thing to do when developing instructional goals is to determine who the "stakeholders" are. The stakeholders are the people most directly (and sometimes indirectly) affected by the training and its outcome: the people who are most invested in the training and most affected by its success (or failure). In the case of my project, the stakeholders are
  • me (as the Language Instruction Specialist of the department, it's my job to teach pedagogy principles and lead professional development),
  • my boss (because his reputation is affected by the work CeLCAR publishes) 
  • CeLCAR, the department I work for (part of our grant stipulates providing professional development opportunities for LCTL teachers and language developers, and every four years we have to reapply for funding)
  • US Department of Education/Title VI (because they provide the majority of our funding and evaluate our materials)
  • Indiana University/College of Arts and Sciences (because they provide partial funding and it is their name on the final project)
  • Language Developers (they will be using the training to help them develop language textbooks)
  • LCTL Teachers (because they will be using the finished product to teach their classes)
  • LCTL Students (because they will be using the textbooks to learn the languages)
For this project, I plan on focusing on the stakeholders I have immediate access to: my boss, CeLCAR, the language developers, LCTL Teachers, and LCTL students.

Form the Instructional goals and identify their indicators
Once you have determined the stakeholders, you can think about what you want the learners to be able to do at the end of the training. Frick and Boling go one step further and encourage a designer, while working on instructional goals, to think about how one can assess whether or not a learner has acquired the desired ability/skill/knowledge. I especially enjoyed this discussion, because these concepts are mostly new to me (they are very different from what we discussed in my Testing courses, which focused more on how to write good traditional tests). The examples given on the differences between "more or less efficient and more or less authentic" (p. 11) were very useful for illustrating the different levels of efficiency and authenticity available.

Obviously, I wrote my instructional goals before thinking about mastery, and I am still unsure of how I am going to assess them. That means, as I start thinking about mastery assessment (due next week), I expect I might have to revise my instructional objectives as well. (It is an iterative process, after all!)

Learner Analysis
Conducting a thorough learner analysis is important for determining motivation, predicting possible difficulties, planning learning, etc. In the case of my project, I am very familiar with my learner audience. I feel my biggest obstacles will be that:
  • my learners are second-language English speakers
  • they work full time (they could view this training as another drag on their time)
  • many LCTL developers/teachers do not have much formal language education experience (so I must be careful not to teach beyond their zone of proximal development)
I will have to continue working on a thorough learner analysis in order to get a more complete view of my learners (their comfort with technology, etc.).

Context Analysis
Frick and Boling go to lengths to describe that web-based instruction should never be designed as a "just because you can" choice. Instead, a designer should have sufficient justification for choosing a web-based context over standard pen-and-paper and/or classroom delivery.

In the case of my project, my justification is: 
  • time (instructors/developers can complete the training on their own and at their own speed, with no meetings to attend that interfere with their many other duties)
  • money (we can't afford to host a multi-developer in-person training)
  • distance (many of our potential developers are located overseas and are unable to attend in person)
I'm still not sure of the resources we will need, besides a computer and internet access. I will work on this more as the project develops.

Wednesday, May 22, 2013

Individual project

Project Title: Fundamentals of Designing and Developing Advanced Level Textbooks for Less Commonly Taught Languages (LCTLs)

Target Audience: Language teachers and/or language teaching materials developers. Men and women. Ages mid-20s to late 50s. Advanced degrees in language education and/or applied linguistics. International (some living in the US and some living abroad). Primarily non-native English speakers.

Description of Problem: CeLCAR is a Title VI, federally funded Language Resource Center that creates learning materials for teaching the languages and cultures of the Central Asian region. In the past, language developers have worked independently and without much guidance when developing language textbooks. Recently, new developers have used previously developed textbooks as a guide for developing new textbooks, as well as consulting with an on-staff language pedagogist. However, this one-on-one/face-to-face training is not as organized and efficient as it could be. It is also especially ineffective for working with developers living overseas.

By creating a CBT course to teach the fundamentals of designing and developing advanced level textbooks for LCTL developers, the center will save time during the development process. Additionally, by reminding the developers of the foundations of language education, the content and effectiveness of the materials will be strengthened. And finally, creating a CBT will allow non-local developers to access and benefit from the training as well.

Instructional Objectives:
1. Without references, be able to name (write) and define (write) the 5 Cs* of Foreign Language Standards.
2. Given 10 examples of textbook tasks/activities/exercises, be able to correctly identify (label) with 100% accuracy which of the 5 Cs of Foreign Language Standards is being exemplified (if any).
3. Without references, be able to identify (circle):
       a. a task.
       b. an activity.
       c. an exercise.
4. When provided with the ACTFL** Proficiency Guidelines, be able to correctly identify (circle) three examples of Advanced-level tasks/activities/exercises.
5. When provided with the ILR*** Scale, be able to correctly identify (circle) three examples of Level 3 tasks/activities/exercises.
6. Without references, be able to explain (write) the differences between the purpose of the ACTFL Proficiency Guidelines and the ILR Scale.
7. Without references, be able to define (write) the Communicative Language Teaching Approach and list (write) the five features.
8. Provided 5 themes and 5 topics, be able to discriminate (label) between a topic and a theme.
9. Given a potential textbook theme, be able to name (write) ten example topics related to the theme.
10. When given an example topic, be able to provide an example (write) of applying concentric design.
11. Given a sample Scope and Sequence, be able to identify (label) the parts.
12. Using all references provided in the training, create (write) a skeleton Scope and Sequence for an Advanced LCTL textbook.

* Communication, Culture, Comparisons, Communities, and Connections
** American Council on the Teaching of Foreign Languages
*** Interagency Language Roundtable

W2: Mager's Instructional Objectives

The first time I read Dr. Mager's Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction was last summer, in my R521 class. I had experience writing learning objectives through my education and experience as an English teacher; however, I had never written such technical and specific instructional objectives.

Summary

The book covers how to write instructional objectives, but it also addresses why clearly stated objectives are so important and provides opportunities for learners to identify components and practice writing their own.

According to Mager, all instructional objectives should contain an audience (the who), a behavior (what the learner is expected to do), a condition (a context, a situation, etc.), and a degree (the quality of performance the learner must achieve, time limits for completion, etc.).
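
To make these four components concrete for myself, here is a minimal sketch (in Python, purely as an illustration; the field names and the decomposition are my own, not Mager's wording) showing how one of my own project objectives breaks down:

    from dataclasses import dataclass

    @dataclass
    class InstructionalObjective:
        audience: str   # who will perform (often only implied)
        behavior: str   # the observable action the learner will do
        condition: str  # the context, materials, or constraints given
        degree: str     # how well or how fast the performance must be

    # Objective 2 from my individual project, split into Mager-style parts
    objective_2 = InstructionalObjective(
        audience="LCTL textbook developers (the trainees)",
        behavior="identify (label) which of the 5 Cs each item exemplifies, if any",
        condition="given 10 examples of textbook tasks/activities/exercises",
        degree="with 100% accuracy",
    )

    print(objective_2)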

My favorite part of the book is probably the section on Gibberish (pp. 142-143). Being in the field of education, I'm used to people relying on gibberish and holding it in a weird esteem. I'm much more fond of the short, simple, say-what-you-really-mean approach that Mager endorses.

What I find most difficult about writing Mager-style instructional objectives is coming up with degrees that meet his criteria. Because most of my trainings are professional development for teachers and/or related to computer-assisted language learning, I find it very difficult to come up with meaningful, accurate, and effective measurement criteria for the objectives.

The aspect of Mager-style objectives I found most superfluous is the audience. I don't really see the point of saying "The student will be able to" over and over when the nature of trainings and objective writing directly implies that it is the student who will demonstrate the ability. It's not as if, without stating this directly, one could get confused and think it was someone other than the student. It seems to me that one should be able to define the audience before stating the objectives, and then the audience for all objectives could simply be assumed to be the students so defined.

W2: Merrill's Five-Star Rating Scale

(Sorry for the delayed postings this week. I'm recovering from a mean stomach bug. Yuck!)

Summary of 5 Star Instructional Design Rating by M. David Merrill

Merrill's 5-Star Instructional Design Rating is an evaluation tool for rating courseware according to the First Principles of Instructional Design, using a three-level star rating (bronze, silver, and gold).


For each of the first principles identified (task-centered approach, activation, demonstration, application, and integration; Merrill et al., 2008), Merrill provides three questions to be used for evaluating how well the principle is being addressed within the instruction. It is not clearly stated, but one can deduce that if all questions are "yes" then that principle receives a gold star rating; if two of three questions are "yes" then the principle receives a silver star; and if only one question is "yes" then it receives a bronze. Presumably, if all questions are "no", then no star is awarded.
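
As a minimal sketch of that deduction (my own reading, not an official rubric from Merrill; written in Python only to make the rule explicit):

    def star_rating(answers):
        """Map the three yes/no answers for one principle to a star level.

        Encodes my deduction above: three "yes" answers earn gold, two earn
        silver, one earns bronze, and none earns no star.
        """
        yes_count = sum(bool(a) for a in answers)
        return {3: "gold", 2: "silver", 1: "bronze", 0: "no star"}[yes_count]

    # Example: two of the three demonstration questions answered "yes"
    print(star_rating([True, True, False]))  # -> silver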


My biggest issue with this rating system is that it's too subjective. I would think it is a fine tool for Merrill to use, because he knows exactly what his own criteria are for the questions. However, I don't think that, if you tested it for reliability between a set of different raters, there would be consistent ratings across the raters.


For example, 3.b.(2): Multiple representations are used for the demonstrations? Are two representations sufficient? Or three? Or four? What constitutes multiple? Or 3.c: Is media relevant to the content and used to enhance learning? What one evaluator considers relevant might not be equally relevant to another rater with higher standards, more experience, or different expectations.


So while it seems like a good start, I really feel that if this is going to be a tool used universally for evaluating instruction, more work needs to go into developing descriptive parameters to improve reliability between raters. (I haven't checked my peers' evaluations yet, but if two of us evaluated the same instruction, it would be interesting to compare our ratings in order to examine the 5-star rating's reliability.)


Speaking of reviewing: most computer-mediated learning that I'm familiar with falls into the receptive ("spray and pray") or exploratory ("sink-or-swim") categories, so I decided to evaluate one of the examples that Dr. Myers provided us. I chose Understanding Creditor Statements by Wendy Baez (2006).



Is the instructional architecture tutorial or experiential? Yes. The course is a tutorial for understanding credit card terms, statements, and finance charges, so I would say it does go beyond receptive or exploratory.
Is the courseware TELL-&-ASK (T&A) instruction? Here is where I'm on the fence. Yes, this courseware provides multiple-choice/short-answer questions as part of the individual modules (with immediate feedback), which seems to fit the description of Tell-&-Ask. It also has a final assessment, which would seem not to be T&A. But then again, the final assessment also seems to be just a multiple-choice test, so how is that different from being T&A? The learner doesn't have to produce anything as part of demonstrating understanding.

I'm going to assume, since this was provided as a sample to review, that Dr. Myers does not consider this T&A, thus making it appropriate for this 5-star rating. However, I think I will need more discussion to understand why these questions are not considered just Tell-&-Ask.

1. Is the courseware presented in the context of real-world problems?
  a. Does the courseware show learners the task they will be able to do or the problem they will be able to solve as a result of completing a module or course? No. The modules clearly state the learning objectives and sometimes use examples as part of the lesson; however, they do not begin the instruction with a demonstration or authentic example of what the learner will be able to do as a result of the training.
  b. Are students engaged at the problem or task level, not just the operation or action levels? Yes. I find the practice engaging. It could be improved by asking learners to practice on their own bank statements (especially for Lesson 2: Locating Statement Information), but there are logistical reasons why that probably wouldn't work at this time.
  c. Does the courseware involve a progression of problems rather than a single problem? Yes. Each module has questions and/or practice sections, and there is a final assessment at the very end of the course.



2. Does the courseware attempt to activate relevant prior knowledge or experience?
  a. Does the courseware direct learners to recall, relate, describe, or apply knowledge from relevant past experience that can be used as a foundation for new knowledge? Yes. The very first thing the course does in Lesson 1 is show a quote and ask, "Do you recognize the previous statement?" And throughout the course there is reference to the learner's experience and own personal statements.
  b. Does the courseware provide relevant experience that can be used as a foundation for the new knowledge? Yes. I feel like the use of authentic statements and calculators helps build a firm foundation for applying the new knowledge.
  c. If learners already know some of the content, are they given an opportunity to demonstrate their knowledge? No. I do not see where the learner can demonstrate prior knowledge. The course seems to focus on the new information being taught.

3. Does the courseware demonstrate (show examples of) what is to be learned rather than merely tell information about what is to be learned?
  a. Are the demonstrations (examples) consistent with the content being taught (examples and non-examples for concepts, demonstrations for procedures, visualizations for processes, modeling for behavior)? Yes. The examples used (interactive statements, statements, calculators, case studies, etc.) are good examples and are useful for helping the learner understand the concepts and visualizations.
  b. Are at least some of the following learner guidance techniques employed: learners are directed to relevant information, multiple representations are used for the demonstrations, multiple demonstrations are explicitly compared? Yes and no. There are multiple representations (examples) used; however, I don't feel they are explicitly compared. And there really isn't any redirecting to relevant information (unless you count linking to the glossary page as redirecting to relevant information).
  c. Is media relevant to the content and used to enhance learning? This really depends on one's definition of media. The course does not use music or video, but it does use pictures, and it even has a picture with rollover features for a more interactive experience. So I guess I will give it a yes, mostly due to the interactive statement.

4. Do learners have an opportunity to practice and apply their newly acquired knowledge or skill?
  a. Are the application (practice) and the posttest consistent with the stated or implied objectives (information-about practice requires learners to recall or recognize information; parts-of practice requires learners to locate, name, and/or describe each part; kinds-of practice requires learners to identify new examples of each kind; how-to practice requires learners to do the procedure; what-happens practice requires learners to predict a consequence of a process given conditions, or to find faulted conditions given an unexpected consequence)? Yes. Module 2 and Module 3 especially have good practice sections that use different kinds of practice to support the stated objectives and require the learner to demonstrate understanding in order to answer the questions correctly.
  b. Does the courseware require learners to use new knowledge or skill to solve a varied sequence of problems, and do learners receive corrective feedback on their performance? Yes. Learners have to solve a variety of problems and are provided immediate feedback.
  c. In most application or practice activities, are learners able to access context-sensitive help or guidance when having difficulty with the instructional materials, and is this coaching gradually diminished as the instruction progresses? Yes and no. There is no "Help" section or place for the learner to go directly for help; however, new terms are hyperlinked throughout the course, linking back to definitions and/or explanations.

5. Does the courseware provide techniques that encourage learners to integrate (transfer) the new knowledge or skill into their everyday life?
  a. Does the courseware provide an opportunity for learners to publicly demonstrate their new knowledge or skill? No. There is an assessment the learner can take at the end, through a third-party website (but the link is not active); however, I wouldn't consider this publicly demonstrating knowledge.
  b. Does the courseware provide an opportunity for learners to reflect on, discuss, and defend their new knowledge or skill? No. The questions and practice do not seem to lend themselves to discussion, reflection, or collaboration.
  c. Does the courseware provide an opportunity for learners to create, invent, or explore new ways to use their new knowledge or skill? No. No star.
Overall rating:

REFERENCE


Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive principles for instructional design. In AECT Handbook (pp. 173-184).

Saturday, May 11, 2013

Week 1: Prescriptive Principles for Instructional Design

Summary of Prescriptive Principles for Instructional Design by M. David Merrill, Matthew Barclay, and Andrew van Schaak

First Principles

Merrill, Barclay, and van Schaak introduce the First Principles of Instruction, which they identified after examining many instructional design theories and models and looking for the underlying prescriptive principles common to almost all of them.

These First Principles are TADAI:

  • Task-centered problem (learning through doing)
  • Activation (accessing prior knowledge)
  • Demonstration (demonstration and examples)
  • Application (practice WITH feedback)
  • Integration (reflection, discussion, etc...constructivist learning)
Merrill et al. have determined that all ISD models follow these same principles; however, not all models use all of the principles. They theorize that there is a direct correlation between the effectiveness of instruction and the number of principles it employs. A study done by Merrill and Thompson with NETg supported this theory: the group trained using the First Principles performed notably higher than the group that only received demonstration, and significantly higher than the control group that received no instruction. In addition to the difference in performance (both in mastery of skills and time taken to complete the assessment), student and instructor feedback also supported the hypothesis, because the First Principles group reported the highest feelings of satisfaction with the instruction.

Other Instructional Design Principles

Merrill et al. then introduce several ID models (Clark & Mayer's Principles for Multimedia and E-Learning, van der Meij's Minimalist Principles, Foshay et al.'s Cognitive Training Model, Seidel et al.'s Instruction Principles based on Learning Principles, and van Merrienboer's 4C/ID model) and show via comparison tables how they perceive each model's principles to align with the identified First Principles.

Designing Task-Centered Instruction

Finally, Merrill et al. suggest an approach for designing instruction that incorporates all of the First Principles: Merrill's Pebble-in-the-Pond approach. They then provide a table that illustrates the approach, using general explanations for each step and how the steps interact with one another.


My response

Overall, I found that I accepted Merrill et al.'s theory of First Principles; however, sometimes the broadness/generality of the principles seemed too vague to me. For example, one of the First Principles is the Application Principle; part of its explanation is that the learners will "receive intrinsic or corrective feedback" (p. 175), but it doesn't specify the timeliness of feedback. For the most part, I have accepted good feedback to mean timely feedback, and many would say that timely feedback equals instant feedback (especially when it comes to online learning). However, Allen's eLearning principles state "Delay judgement: if learners have to wait for confirmation, they will typically reevaluate for themselves while the tension mounts - essentially reviewing and rehearsing" (p. 179). Merrill et al. identify this as applying the Application Principle, so I'm interested to see how they would explain that the same principle is also applied in van der Meij's "Provide on-the-spot error information" (p. 179). They list both as examples of the Application Principle. Do they not care how/when feedback is given, only that it is given? This seems odd to me.

Secondly, I had a hard time distinguishing in my mind the difference between the Task-Centered Approach and the Application Principle, because I typically think of the practice you do while completing a task-based activity (learning while doing) as the application (practice). I'm not sure if they are saying that the task-centered goal is the general approach to the design (major objective: change the oil in a car), and that to achieve this you must provide application practice with feedback for the steps needed to achieve the overall task (the individual learning objectives/steps needed to successfully change the oil), or if they are just saying that the task-centered portion is only during the actual active learning (content) and the application is only during the practice portion after the content portion. I may need a little more discussion to understand the difference between these two.

This is the first time I've encountered Merrill's Pebble-in-the-Pond approach, and I'm not really sure I understand it. I will try to summarize it using a language learning example:
  1. Whole task/Identify a whole task - Have a conversation about age and birthdays in X language (beginner level).
  2. Progression/Identify series of subtasks needed to achieve whole task - how to ask and give your age, how to ask and give a date
  3. Components/Identify components to achieve subtasks - how to ask and give your age: appropriate grammar and syntax (verb choice and conjugation: I am # years vs. I have # years, modification for case of noun and number adjective, etc.), cultural issues (is it impolite to ask an age? do people generally lie and/or not know their age?); how to ask and give a date: month vocabulary, how to express years, order of date (mm/dd/yy), cultural issues (some people may not know their birthdate)
  4. Strategy/Specify an instructional strategy - modeling (show a video), scaffolding (teacher/student practice), cooperative learning (student/student practice), etc
  5. Interface/Specify the user interface - ?? Online vs. classroom? Small group vs. pair work? Identify multimedia. Or is this approach only for e-learning??
  6. Production/Produce the course - self explanatory.
Comments on how I applied this would be appreciated!

A few other comments as I read: 
  • While overall I like Clark & Mayer's Principles for Multimedia Learning, I was surprised that they advocate that "students learn better from animation and narration than from animation and narration and text (p. 147)" (p. 178). While this may be true, I would be wary of designing instruction that didn't provide text as well as narration, because of accessibility for hearing-impaired learners. I am very familiar with adding alt descriptions for visual items in a program to be read by screen readers for the visually impaired; however, I don't know what the alternative is for hearing-impaired people if you do not provide text somewhere.
  • I was also interested in why van der Meij says to "Prevent mistakes whenever possible" (p. 179) as part of the Minimalist Principles. In language learning, we don't discourage mistakes, because they are most often learning opportunities. Oftentimes, making a mistake and then having to correct oneself for comprehension makes a bigger impact on language acquisition than direct teaching. (Obviously, you have to be wary of repeated mistakes without self-correction, however.)
A few random criticisms:
  • Did anyone else notice they have two of each of Table 14.1, 14.2, 14.3, and 14.4? I get that they were in different sections of the chapter, but they should have either carried the numbering forward through the chapter (14.6 and beyond) or done something to differentiate between the chapter sections, like 14-1.1, 14-2.1, etc. JMO. :)
  • Why didn't they follow Clark & Mayer's principle that "Students learn better when corresponding words and pictures (tables!) are presented near rather than far from each other on the page or screen" (p. 177)? Small criticism, but I hate having to flip back and forth between pages when the paragraph isn't on the same page as the table to which it refers!

Thursday, May 9, 2013

Hello World!


Hi everyone! I'm Amber. I actually live in Bloomington, IN, but I am working on my IST Certificate via distance learning because I am a working mother of 2 (twin boys turning 6 in June). It's much easier to go "back to school" in the comfort of my home office while my kiddos are sleeping in the next room than to attend classes in the flesh.

I currently work full time at IU as an academic specialist developing language materials and curricula. My specialty is language learning technology.

I decided that I wanted to pursue a certificate in IST for two reasons: #1 - to improve my productivity and effectiveness in my current position, and #2 - to broaden my qualifications from the world of language learning technology to instructional technology and design more generally, so that I will be more competitive in the job market.

The main reason I'm interested in IST is that I am a "let's not mess around...do it right the first time and then let's all go home" type of person. When I'm at work, that is time away from my family. I want all time not spent with my family to be effective and efficient, because I want to feel like I'm away from them for a purpose. I cannot stand disorganization and inefficiency in the workplace. It's a waste of the company's money, but I also feel like it's a waste of my time.

I like IST/ISD because it is a systematic attack that finds what isn't working, proposes solutions, and applies them! Then, perhaps my favorite part: EVALUATES whether or not the changes are actually making improvements! This part is so important and so often neglected (at least in academia)!

I feel like I have a good foundation of IST applications through my working experience, but I'm looking for the theoretical education and training, and of course, accreditation via a certificate.
 
An interesting thing about me is that I'm taking this class with my dad! He graduated with an MS in IST from IU in May 2010, and he is considering applying for the EdD, so this class is his way of testing the waters.