Access2OER talk:OER exchange
- 1 Miro and bittorrent
- 2 Cascading download and exchange
- 3 Would stacking content onto CDROMs work?
- 4 Another response
- 5 VMukti
- 6 Mobile access to the major OER repositories via text-to-speech and/or telephony
1 Miro and bittorrent
This is Tiffiniy Cheng from the Participatory Culture Foundation in the US (we make open source video distribution software and will be making an open version of iTunes U). The issue of access is extremely valuable since if we can solve such a singular issue, so many more people could share in the benefits of technology.
I'd like to hear what people think about the viability of technologies like bittorrent in countries with low bandwidth. I actually live in a rural area of Massachusetts in the US and my internet connection is spotty and hardly surpasses dial-up speeds -- bittorrent has seemed like a good alternative for me. Files download in the background so that I can come back when the download is finished -- I have a delay in access, but I have access nevertheless (Miro is one example of bittorrent video distribution).
I love the infrastructure funding model Chris has proposed. Perhaps the only hangup to this kind of strategy is that foundations and donors don't want to fund something this pricey; they feel it is a utility that government or private business should fund. Also, the case for bringing faster internet to countries without it hasn't yet been made on a large scale, my own country notwithstanding.
Lastly, from my experience of other countries with low bandwidth, some areas do have faster connections. I'd like to see a community portal/gateway set up in a public space where people can go to upload courseware and interact with it while others can see a stream of courseware on another screen.
1.1 A comment
It was very interesting to read a mail from you on how you are making an open version of iTunes using open source software.
What open source software are you using to make the open version of iTunes? Or in what open source programming language are you writing it?
The software is Miro, see below.
1.2 More about PCF / Miro / Open Video Alliance
PCF/Miro is a part of the Open Video Alliance (http://www.openvideoalliance.org) and we will be having our first conference on June 19th & 20th @ NYU Law in the US: http://www.getmiro.com/blog/2009/02/open-video-conference-june-19th-20th-in-new-york/. This conference will kick off an open video agenda that actively advances the public interest in new technologies and video policies around the world. I'd like to see hybrid solutions come out of our discussions.
In addition, this conference will be a chance for us to discuss open education initiatives. PCF/Miro is part of an open education initiative aimed at addressing issues with distributing open education materials beyond university walls. We have a proposal for an open alternative to iTunes U, called Open U, which allows those without proprietary software, hardware, and fast connections to download open courseware (our video aggregator/player Miro uses bittorrent technology, which in some ways makes it easier for low-bandwidth connections to download materials). We want to have a mailing list for everyone to discuss what this open alternative should actually look like (unfortunately, it won't be set up before this list closes); until then, please feel free to submit your ideas on the talk section of our current proposal here: http://www.opencastproject.org/project/open_u. One of my future visions for Open U is a central community portal with courseware automatically downloaded and usable by anyone who can physically get to the portal.
Just to give you a quick overview of the conference, we're bringing together technologists, video makers, academics, lawyers, entrepreneurs, etc. to frame the Open Video movement and debate. Many of the discussions happening on this list are issues that need to be addressed -- please think of this conference as a way to solve some of the issues we have discussed, and send in your ideas for discussions. There is an open call for submissions to the conference; proposals can be posted here: http://openvideoalliance.org/proposals/
We believe that connecting the right technological elements together is only part of the battle; also important is having compelling stories from creators, scholars, and other non-techies that help illustrate why people should care about all this stuff. The idea is to get smart and charismatic people, as well as respected organizations, to collectively build a narrative. Our goal is to make the conference an exciting and broad-based event with inspirational talks, cool video art, film screenings, and more.
We expect to get a huge variety of folks to be part of this larger conference: open education publishers, video remix artists, filmmakers, academics, etc.
Organizations involved:
Currently, we've got the following organizations officially supporting the conference (the list will be expanding rapidly over the next few weeks):
Co-Organizers: PCF, Yale ISP, Kaltura, iCommons
Official Partners: Berkman, Creative Commons
1.3 More about Miro
Thanks for the interest -- our video distribution system can be found at http://www.getmiro.com. We will be making an Open courseware section in our guide very soon (volunteer moderators from around the world are already hard at work on it).
We have been discussing bandwidth management issues quite extensively. The kind of bandwidth management suggested by Bjoern is partially completed in Miro, in that we have bittorrent and resume-download abilities, as well as multiple file format awareness and playback (there's always more to do on this front as well). We would like to embark on figuring out network connections -- this in my view is extremely important for countries all over the world. I've discussed this proposal with our development team and it seems it would take a fair amount of work, as you can imagine what it would take for a program to continuously check the network connection and then follow a download algorithm based on that.
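A minimal sketch of what such a network-aware download algorithm might look like. The function names, thresholds, and the idea of a bandwidth probe are all assumptions for illustration, not real Miro APIs:

```python
# Illustrative sketch of a network-aware download policy: given how busy
# the link already is, cap the downloader's rate so essential services
# (email, browsing) keep working. All names and thresholds are invented.

def choose_download_policy(other_traffic_kbps, link_capacity_kbps):
    """Return a download mode and rate cap given current link conditions."""
    if link_capacity_kbps <= 0:
        return {"mode": "paused", "cap_kbps": 0}   # no usable connection
    utilisation = other_traffic_kbps / link_capacity_kbps
    if utilisation < 0.3:
        # Link is mostly idle: take at most half of the spare capacity.
        spare = link_capacity_kbps - other_traffic_kbps
        return {"mode": "active", "cap_kbps": spare // 2}
    # Link is busy: trickle along in the background.
    return {"mode": "background", "cap_kbps": max(8, link_capacity_kbps // 20)}
```

A real implementation would re-run this check periodically, which is exactly the continuous monitoring described above.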
If this is a solution that this community and others believe is important to creating broad access to content in developing countries, we would love to be a part of the team to do it. Let's follow Bjoern with this initiative and flesh out some requirements that work for your country!
2 Cascading download and exchange
"OER downloading and exchange system". That is to say "What if you could download OER materials easily, without bandwidth problems?"
There's a proposal right at the bottom of the email, so please do keep reading, and do put forward your thoughts, as to whether a solution like this would be desirable/helpful/would work, etc!
I'll start with some thoughts about Miro/bittorrent/podcasts:
2.1 Some thoughts about Miro/bittorrent/podcasts
- ABILITY TO DOWNLOAD SLOWLY. What is very valuable about the idea of using 'bittorrent' is the possibility to download a resource gradually, over a longer period of time. The same applies to podcasts: you can subscribe to a small file (the podcast RSS file), and then wait for the resources to download.
- ABILITY TO RESUME. What's also valuable is that there may be an application (such as Miro, or the commercial iTunes, or another podcatcher/download manager) that gradually downloads the resource. Moreover, you can quit the program, and resume it some other time.
- NETWORK AWARENESS. What's not so good is that a typical podcatcher isn't aware of the network conditions: in an unmanaged network, it will take up whatever bandwidth it can. At the moment, many university networks are unusable because of (illegal) file sharing activities. While downloading educational resources is of course a better use of bandwidth, it would nevertheless make the network unusable for essential services like email, unless it was managed properly.
- "NETWORK PROXIMITY". Also, a typical podcatcher is not aware that somebody on my local network has just downloaded the same file. Comparing 'podcasting' with 'bittorrent': for podcasting, only a 'downlink' is required. Bittorrent (at least as a concept) also requires an 'uplink', which is often a lot smaller than the 'downlink' and could have a negative impact on the institution. However, bittorrent does have the advantage that in principle it knows about 'other copies' of the same file that may be "nearby", in the sense that there is a good connection to them.
- FILE FORMATS. At the moment, podcatchers are 'multi-media' centric, which of course is why they were developed (i.e. for podcasts). At least iTunes only accepts a range of audio/video/PDF files, and nothing else. Download managers download anything (but are less aware of podcast feeds, though there may be hybrids).
So in my view, what's needed is a 'cascading download' mechanism that knows about 'nearby' files, and is aware of your bandwidth. Here's a potential scenario:
2.2 The Scenario
Suppose you're in Zambia, and you use an application like Miro to subscribe to a feed, say the Participatory Culture Foundation podcast. Normally, the connection would be made straight to the PCF server, and would put immediate strain on your network, stopping others from browsing the web or doing email. However, with our new and improved download system "super miro", the subscription doesn't go straight to the PCF server: it goes to a local server at the school, then via a national Zambian school gateway (run by the NREN, providing an internet exchange point for Zambian schools and universities), and only then to the PCF server.
Also, the 'network' talks back to your "super miro" application, so that "super miro" also doesn't take up the whole bandwidth available, but knows about available bandwidth, and restricts itself accordingly. The user then gets feedback:
This high-resolution video download will take a long time to download (approximately 18 hours). Would you like to download a low-resolution video preview (approximately 4 hours) or a low-resolution audio preview with some keyframes (1 hour) instead? Please choose: [Get the high resolution video] [Get the audio/image preview and notify me when the high res is ready] [just get the low-res video preview]
So the user has the choice of getting a lower resolution version (which needn't be provided by the PCF podcast itself, but could be generated elsewhere), with the option of waiting for the high-res video as well. The user chooses the audio/image preview, and has the file in an hour. When the user has watched this, "super miro" says: "A higher res version is available - do you wish to download it?" If the user proceeds, then in a day or so they get an email notifying them that the high-res file is available on their school server, for immediate viewing.
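The time estimates in a dialogue like this are simple arithmetic over file size and available bandwidth. A sketch, with invented file sizes and bandwidth figures:

```python
# Estimate how long each variant of a resource would take to download,
# so the client can offer the kind of choice shown in the dialogue above.
# Variant names and sizes are made up for illustration.

def estimate_hours(size_mb, available_kbps):
    """Hours to fetch a file of size_mb megabytes at available_kbps."""
    kilobits = size_mb * 8 * 1024          # MB -> kilobits
    return round(kilobits / available_kbps / 3600, 1)

def offer_variants(variants_mb, available_kbps):
    """Map each variant name to its estimated download time in hours."""
    return {name: estimate_hours(mb, available_kbps)
            for name, mb in variants_mb.items()}
```

For example, on a 100 kbps link a 900 MB high-res video comes out at roughly 20 hours, while a 45 MB audio/keyframe preview takes about an hour, which is the trade-off the dialogue presents.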
When the video has downloaded to the user, a copy is kept by the above chain of servers: The school server, the Zambian school gateway server, and perhaps another African internet exchange point outside Zambia. So others requesting the same file don't need to go back to the PCF podcast server to get the file. (However, every time a file is requested, from any of those servers, the PCF podcast server gets a 'ping' so that they have good statistics about how their media is being used.)
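The chain of caching servers with a statistics 'ping' back to the origin can be sketched as follows. The class and tier names are hypothetical; this only illustrates the cascading behaviour described above:

```python
# Sketch of the cascading cache chain: school server -> national gateway
# -> origin (PCF). Each tier keeps a copy of what passes through it, and
# whichever tier serves a request, the origin's statistics see it once.

class CascadeNode:
    def __init__(self, name, parent=None, stats=None):
        self.name = name
        self.parent = parent          # next tier towards the origin
        self.cache = {}               # url -> content copy kept at this tier
        self.stats = stats if stats is not None else {}

    def fetch(self, url):
        if url in self.cache:
            # Served locally; still 'ping' the origin's usage counter.
            self.stats[url] = self.stats.get(url, 0) + 1
            return self.cache[url]
        if self.parent is None:
            # This tier is the origin: count the request and produce the file.
            self.stats[url] = self.stats.get(url, 0) + 1
            data = "content of %s" % url   # stand-in for the real download
        else:
            data = self.parent.fetch(url)  # cascade towards the origin
        self.cache[url] = data             # keep a copy for later requests
        return data
```

After the first download, repeat requests are served from the school or national cache without touching the origin, yet the origin's counters still reflect every request.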
The same mechanism wouldn't just work for video/audio files, but would also work for Open CourseWare, for content packages, etc. For content packages (provided as zip files), "super miro" would be able to look inside the content package, and (just like the video file) the user would have the option of downloading a lower-bandwidth version of the materials first, before downloading the whole content package. (That is to say, the content package can be downloaded in pieces, and then be reassembled by "super miro" on the user side.)
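The piecewise download and client-side reassembly could look something like this sketch (the class name and chunking scheme are assumptions, not an existing protocol):

```python
# Receive a content package piece by piece, in any order, and reassemble
# it on the client side once everything has arrived. This also gives
# resumability for free: after a restart, missing() says what is left.

class ChunkedPackage:
    def __init__(self, total_chunks):
        self.total_chunks = total_chunks
        self.received = {}                 # index -> bytes, any arrival order

    def add_chunk(self, index, data):
        self.received[index] = data

    def missing(self):
        """Indices still to fetch; drives resuming an interrupted download."""
        return [i for i in range(self.total_chunks) if i not in self.received]

    def reassemble(self):
        if self.missing():
            raise ValueError("package incomplete: %s" % self.missing())
        return b"".join(self.received[i] for i in range(self.total_chunks))
```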
The system would not just work online: it would also be possible to "prime" the system with content packages downloaded elsewhere. So a Zambian school server would not need to be on the internet. Teachers would be able to request content packages from the national Zambian school server, which would be mailed to them on DVD / memory stick / hard drive. Those content packages are installed on the Zambian school server, and become available to teachers "as if" the server was on the internet.
Also, the system is bi-directional: Content produced by the Zambia school is uploaded to their school server, but automatically mirrored to the national Zambian server, and perhaps to the server near the African internet exchange point. When somebody from the PCF wants to get a learning resource from the Zambian school, they don't put any strain on the Zambian school network, but the content just comes from a server near the African internet exchange point.
2.3 The proposal
So that outlines the scenario. Because this week is about proposals, here's the proposal:
- Form a small consortium including key stakeholders (such as content providers, NRENs, and content users, ...)
- Do some action research to see whether the above system would be usable and acceptable by schools and educational institutions in the developing world
- Develop guidelines for content providers (how to make their resources automatically downloadable by "super miro")
- Iteratively develop/deploy/test a "cascading download system", both on the server side, as well as for clients
- The "cascading download system" may need to include on-the-fly content transcoding and transformation systems
- Study the impact.
Of course the above proposal isn't completely taken out of thin air: Cliff's work with the eGranary, Theo's work with the Global Grid for Learning, as well as various ideas around the OCWC and OpenCast communities, already do at least some of this, but it would be good to try to tie some of this together globally.
3 Would stacking content onto CDROMs work?
We built up a considerable legacy in UK education on CDROM based titles just as DVD and online took off, slowing down the transition. I have seen that elsewhere.
The POTENTIAL exists for the developing world to make a step change in performance and leapfrog some of the developed world. The balance between TV/PC/Mobile/Games/WiMAX will vary considerably.
What would concern me is that, if we assume we must go down an offline route, we build inertia and barriers downstream in exchange for short-term progress.
To me, it's critical not just that a system would work both online and offline, but that the transition between these two modes would be fluid. Offline content is probably going to be a reality, but this doesn't mean a rack of CDROMs. Rather, it might mean that it would also be possible to "prime" the system with content packages downloaded elsewhere. In the above example, a Zambian school server would not need to be on the internet. Teachers would be able to request content packages from the national Zambian school server, which would be mailed to them on DVD / memory stick / hard drive. Those content packages are installed on the Zambian school server, and become available to teachers "as if" the server was on the internet.
This may be a little ambitious, but if one did this smartly, it might even mean that communication/collaborative features around a particular piece of content or website are maintained while the content itself is available offline -- thus using whatever bandwidth is available for processes that cannot be done offline.
4 Another response
So I suggest that the distinction we need to make is not geographic, but economic. We should be asking, "what are the best technologies to deliver information and communication to the disenfranchised people of (fill in the blank)", rather than "what are the best technologies for 'Africa'."
The hybrid technologies we're exploring in this forum are the things that can make the greatest impact today. Off-line information storage, so that teachers have instant access to a wealth of video, audio and high-bandwidth multimedia. Low-bandwidth updates and messaging using self-healing, asynchronous, inexpensive connectivity (like the satellite broadcasts we've experimented with that serve an entire continent). And then highly managed, intermittent real-time connections for those times when nothing else will serve.
5 VMukti
Video web conferencing technology: introducing VMukti.com
VMukti.com offers a video web conferencing application described as using very little server bandwidth (http://vmukti.com). It is a free, open source application that can be downloaded from VMukti.com.
Documentation supporting the use of the VMukti video web conferencing applications is available from: 1. VMukti.com 2. Google Code
Project description [5]
VMukti (formerly known as 1videoConference) from VMukti Solutions Pvt. Ltd. is a WPF/WCF unified social community engine: an open source, multi-point, extensible platform for unified communications, collaboration, and conferencing, with built-in support for access to platform features through:
a) a personalized, mashable RIA web interface, b) light widgets for 3rd-party sites, c) desktops, d) PSTN, e) mobile, and f) IP phones -- developed in .NET 3.5.
All of these use very low bandwidth / CPU / memory. To date [6], there is a huge dearth of social community engines that offer good-quality unified communications with multipoint conferencing and social collaboration plug-ins for 3rd-party websites. Nor are there products that converge the power of web, P2P and telecom, enabling web, phone, mobile or IP phone users to collaborate. VMukti scores over use-as-is systems that are non-customizable and non-scalable by providing a P2P, modular, customizable platform to clients, and is proud to promote the emergent open source culture. VMukti also gets around the problem of high costs owing to large server / bandwidth requirements, as well as the low-quality output of other collaboration platforms.
The platform combines Web 2.0, distributed, peer-to-peer and grid computing with unified communications, and offers an SaaS platform for web, phone and IM rich media, collaboration and conferencing. This multipoint VoIP and VVoIP video service delivery platform is based on C#, WPF, WCF, and .NET 3.5.
This VoIP and VVoIP technology serves business, government, education, healthcare, and community uses; it is based on WinFX, XAML, and .NET 3.5, and supports Asterisk.
The VMukti architecture is modular, much like the Joomla content management system. You can mix and match modules, or create your own modules and build your own video services, such as:
- Consumer social networking video / audio / text and presence messenger
- P2P Video conferencing services
- Multi-party web conferencing and recording
- Video over Internet Streaming / broadcasting services
- Multi-point video based e-learning solution through integration with Moodle
- Generic instant messaging client through integration with GAIM
- P2P video sharing
- SIP based Video PBX services through integration with Asterisk
- Multi-point remote desktop monitoring
- Multi-point remote desktop controlling
- Co-authoring services
- Other related technologies suffer major video-quality issues due to their client-server architecture; VMukti gets around this by implementing a P2P architecture.
- VMukti offers a web-based as well as a desktop application, and works around firewall and NAT traversal issues through the popular "hole punching" technique.
Hardik (VMukti CEO) wrote May 21 2007 at 2:45 PM Thanks for the compliments...Theoretically, VMukti can support 10,000 [end-users] as there is no load on the server :) but we need somebody to test it and give us feedbacks on max users. Thanks Hardik …
- Demo, Videos & More details: VMukti.com
[5] VMukti.com, October 2008, "VMukti – Unified Communications Social Activity": http://www.codeplex.com/vmukti
[6] VMukti.com, May 2007, "Enabler of Web 2.0 Audio/Video & Presence Collaboration for 3rd party domains": http://code.google.com/p/vmukti
6 Mobile access to the major OER repositories via text-to-speech and/or telephony
This is not a new idea, but I think it is relevant and not yet implemented or championed.
The main idea is to take advantage of the deep penetration of mobile phones in the "developing" world and use them to access OER (or any online resource) starting with the most prominent repositories and platforms (such as Le Mill, WikiEducator, Wikiversity, Connexions, installations of EduCommons, Kewl/Chisimba, Sakai, OER Commons, etc.).
SMS a "word" (or some other key phrase) to a number and the system reads back from the relevant OER repository using text to speech.
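The SMS-to-TTS scenario amounts to a keyword lookup over a repository index before handing the text to a speech engine. A minimal sketch; the index contents and function names are invented for illustration:

```python
# Sketch of the SMS front end: a keyword arrives by text message, the
# gateway looks it up in an index over the OER repository, and the match
# would be read back to the caller by a text-to-speech engine. The index
# entries here are stand-ins, not real repository content.

OER_INDEX = {
    "photosynthesis": "Photosynthesis is the process by which plants ...",
    "fractions": "A fraction represents a part of a whole ...",
}

def handle_sms_keyword(message):
    """Look up the texted keyword; the result would be passed to TTS."""
    keyword = message.strip().lower()
    text = OER_INDEX.get(keyword)
    if text is None:
        return "No resource found for '%s'." % keyword
    return text
```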
- Dial a number for a specific topic and:
- Select language
- Press 1 for Spanish, 2 for Portuguese, 3 for KiSwahili, etc.
- Select topic
- Press 1 for World Geography, 2 for ...
- Select sub-topic ...
- Press 1 for the introduction, 2 for learning activities, 3 for solutions, ...
- ... etc. ...
- By SMS, post a question about your mathematics/physics/whatever subject homework
- and have hints (not answers) sent by SMS from one of a collection of mentors who see the questions on a web site and send out the hints.
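The dial-in menus above form a tree navigated by keypresses. A sketch of that navigation, with an invented menu and content identifiers standing in for real repository entries:

```python
# Sketch of the keypad-driven menu: a nested dict whose leaves are content
# identifiers for the TTS engine to read out. All entries are invented.

MENU = {
    "prompt": "Select language",
    "options": {
        "1": {"prompt": "Select topic", "options": {
            "1": {"prompt": "World Geography: select section", "options": {
                "1": "world-geography-es-introduction",
                "2": "world-geography-es-activities",
            }},
        }},
    },
}

def navigate(menu, keys):
    """Follow a sequence of keypresses; return a prompt or a content id."""
    node = menu
    for key in keys:
        node = node["options"][key]
        if isinstance(node, str):
            return node                    # reached playable content
    return node["prompt"]                  # mid-menu: read the next prompt
```

An IVR platform like Asterisk (mentioned below under OpenPhone) would speak each prompt and collect the keypresses; this sketch only shows the menu logic.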
These three scenarios could be implemented by extending the following projects:
and I am sure there are others which could be integrated into the concept.
Technical Resources (mostly FLOSS):
- For MobilED:
- development site for MobilED
- Mobiled v2 development site.
- python-mms - a python library for the creation, manipulation and encoding/decoding of MMS messages used in mobile phones.
- Mobiled v3: Entangled - a distributed hash table (DHT) based on Kademlia, as well as a peer-to-peer tuple space implementation.
- Mobiled v3 platform - development site (code only, currently hosted by UNICEF).
- For OpenPhone:
- Dialogpalette - web site (includes the Asterisk TTS modules developed for the OpenPhone/MobilED projects).
- LWAZI - mobile access to public services in South Africa. Major project outputs likely to be usable in 2009.