Xtranormal seems a useful tool. I’m sure I can think of a few ideas for it.
They both make valuable points; I know that I am definitely someone who reads ahead of the presenter. I think there are in fact two things that we have to consider.
There are some overlaps: the points that both raise about the fact that most audience members read ahead while the presenter is discussing particular points. Kuhlmann’s example of the way cells work for mobile phones is a good one. That’s something that’s quite hard to get across in text; an animation makes it much easier to understand. I know that I’m guilty of poor PowerPoint usage: I often use bullet points, and then expand on them. Finding relevant images and animations isn’t easy. There is also the thought that students want to have the notes [aka key points] of the lecture for reference, to catch up, etc. (Indeed, I have just requested a set of slides for a lecture that I missed.) Should the PowerPoint slides really serve as a summary of the lecture, or should they be something else: something to trigger the imagination, to get students to start to create ideas?
The point, however, that both raise about the fact that most people read faster than they can listen doesn’t apply in the same way in a face-to-face setting. You can’t fast-forward the lecturer, though you can, in most cases, press the “pause” button to request further clarification, something that isn’t as easy in an online (asynchronous) lecture.
Online students have slightly different needs. They can’t use the “pause” feature of a live lecture, but they do have (assuming it’s given to them) the option to fast-forward. Both Moore and Kuhlmann point out the difficulties of not allowing that option.
Equally, as several of their commenters have noted, there are accessibility requirements that mean that audio alone, without a transcript, isn’t appropriate (nor, for that matter, is text without audio; not everyone finds reading easy).
I’m not sure that I know what the answer is. I do know, however, that they have given some really good examples of using audio effectively, and it’s something that I really need to look at.
Via: Stephen Downes
EveryZing attempts to analyse the audio in online audio and video to enable searching (technical details are outlined in SpeechTechMag). I’ve just tried searching “News” for “Peter Tobin”, who has cropped up a lot in the UK news in the past few days. I didn’t get any hits, though when I extended this to “All sources” I got several YouTube videos (including several from UK-based news agencies). I guess that it’s predominantly the North American (US?) news channels that it searches. Blinkx gave me quite a few more hits.
I’ve been using Talkr for a while now to create audio podcasts of my blog postings. From Scott’s blog, I discovered xFruits, which he’s using to create a PDF of his RSS feed. I’ve just managed to do the same, though it took some time, as I wasn’t sure which RSS feed it wanted, and it seemed to be quite fussy (the Atom one satisfied it). xFruits have a range of services, including an audio-generating one. After quite a few false starts, I’ve managed to create one, and eventually worked out how I think I can listen to it. As far as I can tell, I have to go to VocalFruits and sign in. The voice is better than the Talkr one; the drawback, probably related to the quality of the voice, is that I can only have 100 free listens. I’ve used up a few already testing it. If I want to use it more, I have to pay 35 a month (for up to 1,000 listens). Guess I’ll stick with Talkr! (The PDF creation would appear to be free.)
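Incidentally, the reason a blog ends up with several candidate feeds (RSS 2.0, Atom, comments) is that each is advertised as a separate `<link rel="alternate">` tag in the page’s head, which is what feed autodiscovery reads. As a minimal sketch of how a service like xFruits might find them (the HTML snippet and paths here are invented for illustration, not taken from any real blog):

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect feed URLs advertised via <link rel="alternate"> tags."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []  # list of (mime type, href) tuples, in page order

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and a.get("type") in self.FEED_TYPES:
            self.feeds.append((a["type"], a.get("href")))

# A made-up page head advertising both an RSS 2.0 and an Atom feed
html = """<head>
<link rel="alternate" type="application/rss+xml" href="/feed/rss2/">
<link rel="alternate" type="application/atom+xml" href="/feed/atom/">
</head>"""

finder = FeedLinkFinder()
finder.feed(html)
for ftype, href in finder.feeds:
    print(ftype, href)
```

A fussy aggregator would then pick one of these by MIME type; presumably xFruits was only happy with the `application/atom+xml` entry.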
I’ve had this bookmarked for a while now: Ma222 Statistical Analysis. It’s a blog (now a little static) in which the lecturer has used a tablet PC, captured what’s going on, and uploaded it to Blip TV to demonstrate aspects of stats.
Nice and simple – and in several ways quite similar to Alan Cann’s videos for Biologists.
Vicki Davis gives a good overview of downloading and editing, and then, crucially, ideas for how one might cite a section taken from Google Video, as it’s something that the citation guides haven’t really taken on board … yet.
The video that she has about citing uses CitationMachine, something that David Warlick has helped to develop. I’ve had a look, but as it doesn’t have Harvard as one of its citation systems, I’m not sure how useful it will be. I’ve looked at Zotero, the Firefox plugin that I’m using, and it has options for Video/TV/Radio, so it should allow the same sort of data. Zotero doesn’t automatically output as Harvard, but I can output the data in a form that EndNote can import. (Which I can’t test immediately with a record for a video, as I’ve not got EndNote on this PC.)