sharing worldwide learning and research: informal, formal, individual and social learning, mobile, learning analytics, MOOC, AI, maker-based learning design... I love it, and combine it
The Online Learning Consortium (OLC) Accelerate, in collaboration with Inside Higher Ed, is planning two days of virtual meet-ups with eLearning experts, called virtual thought leader interviews. These interactive interviews promise to give all attendees the opportunity to enter into a dialogue with the experts as well, using Shindig as a virtual meeting tool. Registration is free and can be found here; you only have to provide an email (for confirmation and for sending you the login information).
When: Wednesday, November 15th, from 11:30 am - 1:30 pm ET and Thursday, November 16th, from 12:00 pm - 2:00 pm ET (for a time conversion, have a look here).
From the event description:
Moderated by Inside Higher Ed Editor & Co-Founder Doug Lederman. Shindig’s unique technology will also enable online participants to
discuss, network, and socialize privately with one another as if they
were attendees at the OLC Accelerate Conference. The growing list of confirmed interview participants includes: Curtis Bonk, Professor at Indiana University and Owner of CourseShare; Phil Hill, Co-Publisher of the e-Literate blog, Co-Producer of e-Literate TV, and Partner at MindWires Consulting; Rolin Moe, Assistant Professor and Director of the Institute for Academic Innovation at Seattle Pacific University; Jill Buban, Senior Director of Research & Innovation at the Online Learning Consortium; and many more!
Shindig also seems to have an iOS and Android app to join the event while you're on the go, possibly with the same ability to chat privately, submit text questions, and be spotlighted to the stage, just like the desktop version.
Realities360 Speaking Proposals Due this Week
There are only four days left to submit a speaking proposal for the 2018 Realities360 Conference. Realities360 is about exploring
emerging technologies to create new and exciting immersive learning
experiences.
They are interested in proposals
exploring the design, development, and/or implementation of learning
programs that take advantage of virtual and augmented reality and
simulation technologies.
Deadline for submitting a speaking proposal: 17 November 2017
Event dates: 26 - 28 June 2018 in San Jose, California, USA.
Today I have the pleasure of attending the Posthuman
Resilience in Major Emergencies (PRiME) networking event organised by the OU,
UK. This is definitely a timely event, as it launches a constructive exchange of ideas on what we need to think about to enable societies to be resilient in major emergencies (natural and human disasters affecting regions small and large). The main aim of the workshop is to bring together
researchers and stakeholders from a variety of fields within the future
technologies area.
The workshop focuses on emergency situations, particularly
in major events and disasters, which in today’s connected world require
sophisticated responses involving extraordinarily close collaboration between
humans and technologies. The concept of resilience has been identified as
encapsulating a highly desirable characteristic of both humans and technologies
in these settings. Although resilience has been the subject of extensive
research in various academic and technical domains, it needs to be thoroughly re-examined
in relation to the prospect of a post-human future, e.g. in 50 to 100 years, in
which human capacities may be manipulated and radically enhanced. If you are
interested in this challenge and have relevant ideas or expertise, you are
invited to join us in our upcoming workshop where the concept of resilience
will be a core aspect.
A posthuman approach to resilience might analyse networks of
which humans are only a part, or assemblages composed entirely of non-humans.
It may involve applying abstract concepts of resilience to humans and nonhumans
alike; or "pluralizing" the concept to acknowledge different ways in
which things or subjects can exhibit resilience. It may explore the
contribution of nonhuman actors to forms of stability traditionally viewed in
human terms, or seek greater recognition of diverse interests in being
resilient.
The day is filled mostly with 30-minute keynotes on posthumanism, resilience, human-machine interaction, communication, and robot technology.
Some first thoughts picked up while liveblogging:
Resilience: some info (I came in a bit after the start of the first keynote, due to train travel).
From Mars exploration: space technologies, self-driving rovers and cars, with no external location info available.
They use AI to map the area, with computer vision & cameras: mapping the world in 3D, mapping where the rover is, and then planning.
Energy is limited: solar power in combination with a battery. Autonomous sensors use battery power; the more watts used, the less energy is left for moving around. Sometimes cheap sensors can be used, but sometimes (e.g. when challenges are met) more expensive sensors are needed. So what I tried is modeling the terrain and looking at which types of sensors can be used, where the software calculates which sensors can be used in terms of energy investment. Anyway, the rover maps the way as it is explored. In a GPS-void environment some mapping and exploration can be done, with additional energy saved. But mapping has its limits: the exact photograph taken will provide detailed information, but as soon as the camera angle is different, different information will be given. So, how can different pictures, built from different sensors, ensure accurate information?
The Mars technology is now used in tunnels: surveying tunnels and mapping them. VR and AR tech come from the 3D models sent out of the tunnels, decreasing the risk for humans. But a major challenge is the data coming out of these 3D models: too much information to calculate. Deep learning is an option, fueled by theoretical information and lots of gaming industry feedback. Steep and rapid change, with giant leaps forward every 6 months. Using AI to augment, improve and replace human actors. The current state of the art is changing so rapidly that it exceeds the information coming out (papers, tech…). Up to 2010 error rates were high; with deep learning, the errors have come down, and on very complex images the machines are classifying better than human beings do. This can be used for any visual analysis at the moment, and to look for information of interest.
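(inge: to make that energy trade-off concrete for myself, here is a minimal sketch of the idea as I understood it - invented sensor names and numbers, not the speaker's actual software: for each terrain segment, pick the cheapest sensor that is still adequate, so more battery is left for driving.)

```python
# Hypothetical energy-aware sensor selection; all numbers are made up.
SENSORS = [
    # (name, watts drawn while sensing, max terrain difficulty it can handle)
    ("cheap_camera", 2.0, 0.3),
    ("stereo_camera", 6.0, 0.6),
    ("lidar", 15.0, 0.9),
]

def pick_sensor(terrain_difficulty):
    """Return the least power-hungry sensor that can handle the terrain."""
    adequate = [s for s in SENSORS if s[2] >= terrain_difficulty]
    return min(adequate, key=lambda s: s[1]) if adequate else None

def plan(route, battery_wh, hours_per_segment=0.5):
    """Walk the route, spending battery on sensing; the rest is left for driving."""
    spent = 0.0
    for difficulty in route:
        sensor = pick_sensor(difficulty)
        if sensor is None:
            print(f"segment {difficulty:.1f}: no adequate sensor, replan")
            continue
        cost = sensor[1] * hours_per_segment  # watt-hours spent sensing here
        spent += cost
        print(f"segment {difficulty:.1f}: use {sensor[0]} ({cost:.1f} Wh)")
    print(f"sensing used {spent:.1f} Wh, {battery_wh - spent:.1f} Wh left for driving")

plan(route=[0.2, 0.5, 0.8], battery_wh=100.0)
```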
Autonomous robotics for surveillance: that way you minimize the risk for humans and visualise or provide detailed information, plus deal with the problem of lots of data.
Another big problem is the human-machine interaction, as the technology does not (yet) understand human communication: the interface for communicating between human and machine.
(inge: makes me think of a lot of Internet of Things problems revolving around energy versus tech action.)
With Microsoft providing a glimpse into its HoloLens project (related to Windows 10 options), I felt it would be good to recap on augmented reality, and see what thoughts come to mind when thinking about the exciting HoloLens endeavor.
History in the blink of an eye
The concept of augmented reality has been around for quite some time. More than a hundred years ago (1901) L. Frank Baum - the author most famous for The Wizard of Oz - came up with the idea of an electronic display that put a layer over the real-life world. As graphics and computational power increased, Steve Mann (who came up with the wonderful concept of sousveillance, appreciated by activists everywhere) built a system that put text and graphics over a photographic image, creating an augmented reality: the EyeTap.
This is where it becomes of interest for education, as simulations become a possibility. By 2008 augmented reality was being rolled out to the masses: Wikitude, Layar, and the inevitable use of augmentation in marketing (print, buying via QR codes... e.g. Metaio) became possible through mobile phones with apps.
Augmented reality as performance enhancer
As augmented reality becomes more mainstream, the public implementations and job performance options become more apparent, which leads to bigger projects. In education, augmented reality has been used in video support of specific historical reenactments (now frequently used in documentaries, for example the 3D imaging put on top of the real world in this trailer of Archaeology of Portus). The implementations of augmented reality in professions directly or indirectly related to design, architecture and engineering are straightforward: augmented reality allows a concept or new design to be investigated at lower cost and in 3D. Augmented reality has also been successfully used to guide workers through specific (new) jobs by providing them on-site virtual support on what they need to do with the parts they need to fit together (e.g. this nice slide deck on the topic).
Mobility as a driver
The roll-out of mobile technology and mobile devices was crucial for sending augmented reality out into the real, mainstream world. And those same mobile devices had something that would increase the augmented options: sensors. With these sensors, multiple tracking and spatial location options became possible, increasing the overall augmented reality experience.
The mobility of all of us pushed augmented reality into the public sphere. There were some first steps towards a more holistic, augmented approach for the general public: Google Glass, Meta glasses... but now Microsoft comes up with the stand-alone (nice!) HoloLens computer.
The Hololens
The HoloLens promises to be a fully functional computer that allows you to interact in a space - living room or office or park, anywhere - without markers, without wires, just the device as a native instrument. And, what is a great addition: it builds upon the motion detection that was put on the map by Kinect. As such it combines human motion with mobile sensors to dip into all the digital content that is already out there on the Web, in the cloud... so no more wires, just tapping into the virtual, digital world. It seems like a real augmentation of the human body and mind.
I like it, a lot. Especially because it is a native machine. And of course it offers options that no other device ever offered, as such it brings along the pleasures of tinkering with a new invention. The options for education are multiple: augment the classroom, augment housework, increase informal, augmented and immediate learning... It is a really cool tool, a nice new human instrument.
But what are the first ideas that come to mind when reflecting on possible side effects?
The promo video puts forward 'More reality than ever before' as one of the mottos of the HoloLens. And I can see how this seems like a truth, but is it? Our brains can only process so many inputs. So we might gain time when using the HoloLens (no longer having to find wifi first, or other barriers that limit immediate access to content), and that time might be used optimally thanks to the merging of data (e.g. recognize a face, know what that person does or is an expert at, and immediately strike up a conversation - or not), but reality is the sum of all things, and our attention only picks up whatever we are searching for.
More virtual options for thinking over a design (in any field) also means that fewer people are necessary in those fields. What can be done digitally no longer needs to be done manually. This will affect the job market - for those design support jobs at least.
The immediacy of the information and augmentation also makes me wonder about the immediacy of propaganda. Photoshopping will be instant: merging live events with fake objects/people and streaming them as if they were real.
And inevitably the barrier of us humans becomes clear once again. We invent things, apply them, but we never seem to cross over to the other species, the super-human, or the non-human. Anything and everything we do seems to mimic humanness... I wish we could get over that.
The Specific Absorption Rate (SAR) information would be of interest to me, as it is a stand-alone device which is worn close to the brain.
Well, fun to reflect, I think. And what a cool tool the HoloLens seems to be! The link to the live event of the HoloLens can be seen here. But I'd rather share the HoloLens trailer below:
Wearable technologies are booming business, but a lot of it is still very expensive. And with Google announcing just today that Google Glass will be reinvented, if not stopped, it got me thinking about cost and educational options. Just think about all the developers that bought Google Glass ($1,500) and now get the news that the project is being reinvented. Or those schools that purchased a set to allow students to research its functionality.
Certainly when looking at smart glasses, there is a lot of expensive material (coming) out. As multiple options are being launched (or are on the verge of being launched), I do wonder what to go for, budget-wise. If the half-life of tech is only about a year... it might not be wise to invest in it, time- or budget-wise.
Cheap virtual reality and smart glass solutions are increasingly being rolled out, but as with all technology, multiple companies are trying to corner the market, and in the end only a few will be left standing (and it seems tech launches and halts happen quicker than ever). A couple of nifty options: the 'classic' Google Glass, which is now being rethought; the more advanced Meta space glasses; the more stylish-looking (yet with a wire hanging from the ear) Atheer Labs option; for gamers, of course, the Oculus Rift or the about-to-be-released Sony Project Morpheus; and for the more cognitively oriented among us, the Emotiv Insight headset, which is said to be available in April 2015 and which monitors brain activity. But all of it costs a lot of money (ranging from $350 at the cheapest to $1,500). The latest is Microsoft's HoloLens, which merges the virtual and IRL nicely.
On the one hand it is clear that smart-everything is the way forward, but the cost of each item makes it tricky to test all of them in order to find their educational value. Using such tech in classrooms or global courses is at the moment nearly impossible, unless... you go for the cheaper option: e.g. Google Cardboard. This virtual reality option allows everyone to either build a Google Cardboard from the kit, which turns a mobile phone into a virtual reality viewer, or to buy a cheap cardboard box to be used with a smartphone (plus apps, which you can search for depending on your mobile operating system: e.g. Android via Google Play, but it is also said to work with the iPhone).
What is interesting when looking at all these smart technologies is that all of them rely on crowd development to provide more meaningful features or applications for their hardware. That of course has a very interesting educational bonus, as it clearly supports peer knowledge creation based on APIs or other boundaries provided by a couple of experts - an interesting shift that has been growing over the last couple of years. The same is possible for the cheaper options as well, which makes them (like Google Cardboard) a nice springboard for young developers with a knack for programming or creative solutions.
Looking at some options that are out there for Google Cardboard (some of which are also available on the more expensive gear, like the Oculus Rift): Tuscany house, a nice application that shows the opportunities for design and architectural simulations that can be made in class. A more old-school tech option: a hang gliding and flight sim(ulation) app - decades ago I was using a flight simulator to get a feeling for what it took to fly a plane. It was (and is) fun, and it is instructional, as simulations allow a more authentic preparation for the actual IRL action. Or more subject-matter-related options: e.g. Moon, which takes you to a virtual moon surface.
All of these apps can offer educational value, but I keep wondering what the extra bonus would be. What can it teach us that we could not be taught in the past? What does it allow me to do that really lifts learning to the next level? All in all, I see the smart glasses as performance enhancers more than as re-imagining education. The simulations bring real-life, authentic learning closer to home; designs can be viewed in three dimensions... but they all seem to stay within learning/teaching that already existed. Just wondering what it could be, what I am missing.
Google Cardboard assembly picture from here. And a really nice, short description of Google Cardboard in this YouTube movie:
David Parsons from Massey University in Auckland, New Zealand has developed a mobile business consulting game. The game is not yet fully finished, but he and his team offer us the means to install the game on our Android mobile phones and give it a try.
The nice thing is, if you are a bit familiar with XML, you can adapt the code to fit your own Google Maps-based location and play in your setting. The game also has augmented features, so it is nice for a variety of reasons.
How to download the game to your Android phone and tweak it to fit your setting can be seen in this recorded webinar (26 min). The zip file of the game and the configuration documentation can be found at this wiki page: https://mobimooc.wikispaces.com/Serious+mLearning+games
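To give you a taste of what that XML tweaking might look like, here is a minimal sketch of shifting the game's locations to your own area. The element and attribute names ("location", "lat", "lng") and file names are hypothetical; the real schema is in the configuration documentation on the wiki page above.

```python
# Hypothetical sketch: shift every location in the game's XML config so the
# game plays in your own city. Element/attribute names are invented; check
# the configuration documentation for the real schema.
import xml.etree.ElementTree as ET

# e.g. move content authored for Auckland (-36.8485, 174.7633)
# to Brussels (50.8503, 4.3517)
OFFSET_LAT = 50.8503 - (-36.8485)
OFFSET_LNG = 4.3517 - 174.7633

tree = ET.parse("game_config.xml")
for loc in tree.getroot().iter("location"):
    loc.set("lat", str(float(loc.get("lat")) + OFFSET_LAT))
    loc.set("lng", str(float(loc.get("lng")) + OFFSET_LNG))
tree.write("game_config_local.xml")
```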
The webinar is part of MobiMOOC2012, a free, open, online course on mobile learning that ran in September 2012.
Augmented reality is slowly moving us towards a future of embedded intelligent technology. In this webinar recording, Víctor Alvarez gives a comprehensible overview of what augmented reality is and can be.
In addition, he shows Ariane, a really wonderful and promising AR tool for teachers and trainers. The Ariane video shows the application, which will be available on iPhone, iPad and Android soon, and which gives teachers and/or trainers immediate, intuitive and simple access to embedding the outer world into a meaningful learning experience. You can watch the Ariane video here (but do know that in this video you only see the application in action, while in the embedded video below Víctor also explains how it works and why it is built the way it is): http://www.youtube.com/watch?v=Ds-t3TUidOo
Or you can look and listen to Víctor's wonderful overview of AR and Ariane below.
This webinar is part of MobiMOOC2012, a free and open online course on mobile learning that took place in September 2012.
Augmented learning is one of the revolutionary technologies of contemporary learning. Augmented technology is still emerging, as applications are still being explored, but the learning experience offered by augmented solutions totally fits the mobile learner. Víctor Alvarez from Spain is one of the augmented learning experts during the MobiMOOC course. During the augmented learning week (22 - 29 September 2012), he will focus on what you can achieve with augmented learning and how you can start embedding it. Here you can find the augmented learning wiki page that offers an overview of what Víctor will cover; in addition, he is already looking forward to hearing all of our experiences, augmented hopes and dreams. This presentation already gives a brief overview of what you can expect.
MobiMOOC is a free, international, online course on mobile learning
(mLearning) that starts on 8 September and ends on 30 September 2012. The
course is open to all, so join us to collaboratively enhance our mobile
learning knowledge.
Participate in the MobiMOOC Award: if you participate in the MobiMOOC award, your mobile learning project overview can earn you 500 USD. See here for more information.
Topics: 12 mobile learning topics will be covered, guided by expert
facilitators on the subject, but always aimed at sharing all of our experiences and plans.
Topics covered: introduction to mobile learning, planning mobile learning (Inge de Waard, Belgium), mobile learning curriculum (Adele Botha, South Africa), corporate mLearning (Amit Garg, India), augmented learning (Víctor Alvarez, Spain), global mLearning topics (John Traxler, UK), mobile health (Malcolm Lewis, Australia), mobile activism in education (Sean Abajian, USA), serious games with mobiles (David Parsons, New Zealand), mobile learning theory and pedagogy (Geoff Stead, UK), mobile learning tools (all of the participants), mLearning for developing regions (Michael Sean Gallagher, Korea/USA/UK) and train-the-trainer solutions (Jacqueline Batchelor, South Africa).
Format of the course: the course is built around the format of a Massive Open Online Course, which means that it can cater to any number of participants (no limit: Massive), is free for all to join (Open), its course locations are all accessible online (Online), and it is a Course. This format is based on connecting all the participants together and collaboratively starting dialogues, discussions and exchanges of ideas.
Project Glass from Google Research starts out as a personal assistant type of technology. But... even in the short 2:30-minute YouTube movie you can see how it could be used for future learning purposes as well. It definitely adds to the mobile learning options, and most of all, it would mean only having to use one - 1 - tool, a thing that sits on my nose daily anyway: my glasses.
Just imagine, combining these glasses with augmented reality and gesture-based learning (keep on top of new innovations at the Kinect education site)! For real! I would definitely travel to Rome and walk around all the sites, learning and viewing ancient Roman traditions, crafts, historical reenactments ... Everything is there: the Roman itineraries have been laid out (http://omnesviae.org/), there are 3D walks through the city of Rome and its architectural glory (http://earth.google.com/rome/) and add simulated mobile reality (http://www.youtube.com/watch?v=NliEGCnlSwM) to this and ... you have got a whole different immersive learning package!
The only thing I want to know is... how to turn it off and have it stay off until I want my glasses to start providing me with information and interaction again. Too much media overload never works for me, but when I like it or need it, it is simply wonderful! It would enable anyone to tap into the stream of information on the Web and... create knowledge on the go.
With the weekend coming up, the 2011 Horizon Report is a good read (40 pages); you can also access it through the Educause link (click on the pdf logo near the end of the page).
The New Media Consortium has an annual habit of looking at contemporary, emerging and future trends in education. And let's face it: education is being redesigned on a weekly basis by now, so it is good to stay on top and pick out those topics that might interest you.
So what do they see as hot educational innovations to watch out for?
Time-to-Adoption One Year or Less: Electronic Books, Mobiles.
Time-to-Adoption Two to Three Years: Augmented Reality, Game-Based Learning.
Time-to-Adoption Four to Five Years: Gesture-Based Computing, Learning Analytics.
The thing I am looking forward to is gesture-based computing. Yes, a bit like Xbox Kinect, SixthSense... which to me erases yet another middleman that keeps us from learning close to the body and brain. So I see this as a big learning enabler.
Learning analytics fits in closely with our global move towards a semantic web, where the data gathered from all of us (in this case as learners) pushes our learning capacity, because it will filter out those learning bits we need the most (or those that the learning/teaching algorithm thinks we need the most). Learning analytics is closely linked to the almost-finished LAK course that I wrote about earlier.
All of the above-mentioned learning technologies come with links for further reading, so give it a go.
Some of the challenges mentioned in the report are also important for today's learners:
Digital media literacy continues its rise in importance as a key skill in every discipline and profession;
Appropriate metrics of evaluation lag behind the emergence of new scholarly forms of authoring, publishing, and researching;
Economic pressures and new models of education are presenting unprecedented competition to traditional models of the university;
Keeping pace with the rapid proliferation of information, software tools, and devices is challenging for students and teachers alike.
So, yes, I think HTML5 is quite innovative, and augmented reality is as well, but... here come the Black Eyed Peas with an incredibly fabulous iPhone application that puts you - as a music listener - in the middle of their video clip (great tip from Gary Woodill, thx!).
Will.I.Am of the BEPs surely knows how to take technology to the next artistic level! And I must say that this is again an illustration of how art can lead the way for education.
Just imagine that you open up your smartphone or tablet and you want to learn about the battle of Waterloo, where Napoleon got his bottom kicked. Not only could you now enter that battlefield, BUT you could also see how Napoleon got defeated and what tactics were used... all the while looking at the battle in 360 degrees, simply by rotating your smartphone or tablet. Now how cool is that!
If you want to get a feel for what they did and you have an iPhone, rush to the iTunes store and search for BEP360 (the Black Eyed Peas 3D 360 application) and put it on your iPhone (it does cost 0.79). If you don't have an iPhone, you can see the effects of the new 3D app in the embedded video demo as well.
Ohohhhhooohhooooowwwww, sometimes I wish I would know some of the great visionary artists!
Learning within context was a difficult task in the past (travel, content design...). But with ever-growing (and ever-simplifying) augmented mobile learning, it becomes a very feasible way of getting learners up to speed with the latest knowledge.
Situated learning was first proposed by Jean Lave and Etienne Wenger as a model of learning in a community of practice. At its simplest, situated learning is learning that takes place in the same context in which it is applied. Lave and Wenger assert that situated learning "is not an educational form, much less a pedagogical strategy" (Jean Lave and Etienne Wenger (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge: Cambridge University Press, p. 40). And I am into that concept!
Let's be honest: if you read this definition as a teacher, trainer or educationalist, you just want to read up on it. In case you doubt whether this could be of interest to your learners, doubt no more; just take a look at this simple situated simulation video proposed by Gunnar Liestøl and his companions. Before looking at the video, know that it is rather easy to make a situated mobile learning lesson if you use Wikitude (I had the pleasure of meeting Gunnar at mLearn2010; this is the blog post on it). If you want to read up on Sitsim, look here for the project description:
Thom(as) Cochrane is a gray-bearded man who is quietly very knowledgeable and who started off with an eye-catching movie excerpt. This is also a real project to check out and definitely a man to follow! Look at this massive illustrative website http://web.me.com/thom_cochrane/MobileWeb2/
The journey starts with visuals from mobile movies by architecture students. These movies were built in teams that tweeted to stay connected, looking at Maori architectural objects. They used Wikitude to enhance the real-world architecture.
The project is all about social constructivism to facilitate mobile learning. (note from me: too far for my QR code reader to connect) How can we use the tool to bring about social constructivist learning? The lecturers got interested as the project moved into the following years, which has led to complete integration of mobile learning for graduate degrees. Each year new curricula appeared, new projects, and within each project the actual learning was researched.
The learners were not as digitally savvy as was expected (not digital natives, and this has not changed much during the years)
There is a Learning management system, but this is not essential to the projects (sometimes they use wikipages).
Critical success factors gotten out of all these projects:
lecturer modelling of the pedagogical use of the tools;
creating a supportive learning community;
intentional COP reproduction: reconceptualising teaching and learning;
appropriate choice of supporting technologies (WND rubric which also looks great)
technical and pedagogical support;
staging, scaffolding and the PAH continuum;
ontological shifts: reconceptualising teaching and learning.
Abstract (as described in the proceedings): This paper discusses six critical success factors for mobile web 2.0 implementation identified throughout fifteen mlearning action research projects carried out and evaluated between 2006 and 2009. The paper briefly outlines the implications of each of the five learning contexts involved in the projects in light of these critical success factors. The resultant development of strategies for future mlearning projects in 2010 and beyond is also briefly discussed.
Conclusion (as described in the proceedings): Mobile web 2.0 is a continually evolving environment, with new technologies and affordances developing at an astonishing rate. However, this research has illustrated that by identifying and putting in place strategies to support mobile web 2.0 critical success factors, it is possible to transform teaching and learning.
Based on Wikitude; depending on the user, they can choose different applications.
The way forward to sustain such a system: have the plumbing in place. Get information from stakeholders (commerce, public). The end users were surveyed beforehand to find out what they would like to know. There was also some space for commercial information from the commercial entities.
This project got a learning award in Munich just last week. They are interested in joining forces for future projects.
They will now enhance it with a gyro (tracking all movements) and implement it for iPad and iPhone.
Abstract as mentioned in the proceedings: The Virtual Mobile City Guide (VMCG) is a mobile application which aims to provide the user with digital equivalents of the tools which tourists normally use while travelling, and provides them with factual information about the city. Using Android technology, the VMCG is a mash-up of different APIs which, together with an information infrastructure, provides the user with information about different attractions and guidance around the city in question. While providing the user with the traditional map view by making use of the Google Maps API, the VMCG also employs the Wikitude® API to provide the user with an innovative approach to navigating through cities. This view uses augmented reality to indicate the location of attractions and displays information in the same augmented reality. The VMCG also has a built-in recommendation engine which suggests attractions to the user depending on the attractions which the user is visiting during the tour, and tailors information in order to cater for a learning experience while the user travels around the city in question.
Conclusion as mentioned in the proceedings: The concept of having a mash-up application designed to assist tourists during their visit was welcomed by many during the evaluation and promises positive prospects. It was also shown that the Android platform provides an adequate environment to develop such applications. The graphical user interface (GUI) was given special attention in the design of the VMCG, but more diverse user interaction methods such as audio should be sought. The GUI should also be developed to cater for directions and improve accessibility by allowing varying text size, among other possible adjustments. A suggestion which also emerged from the evaluation was the possibility of having the VMCG in different languages. In the effort to also meet the solutions proposed by Brown and Chalmers, the collaborative aspect of the VMCG has to be developed, possibly by allowing connection to social networking sites. The evaluation also showed the lack of willingness of users to update the information in the application while they travel. This can be overcome by designing business models which enable incentives for the user. The guidebook aspect of the VMCG should also be developed by providing more in-depth information and also illustrating possible transport connections to other cities in proximity to the city being visited. In this paper we also presented how to apply different technological developments in the field of tourism in order to address the needs in the latter field. The views presented in the VMCG were satisfactorily welcomed by the potential users. It was concluded that the GUI considerations are important for the users of the system, and more effort should be invested in order to allow different and rich ways in which the users can interact with the system. The development of this application in the context of a framework which caters for both pre-visiting and post-visiting is our next challenge.
Gunnar is a curly person with glasses, who just gives an AMAZING presentation!! Please, if you can, look this research up! It is so strongly built (sustainable, new, well thought through). Just amazing!
Mostly working with digital media, on the textual level (the individual digital text and the new genres of new media). So they experiment with new media. Normally the social sciences follow the hard sciences, but then they are not the constructors of the media; this is why they want to put things together and construct the technologies up front. They want to come up with new media and then suggest certain devices to employ and exploit, built upon their media criteria. They used GPS, movement sensors and a compass, working with iOS (Apple), but they also want to go to the Android platform. They focus on 3D, thinking of this as a prototype genre that does not yet exist.
With this augmented reality they wanted to show the Parthenon and overlay information to give a complete, comprehensive experience of it. The mobility and the movement of the perspective also give extra value. In May they made a prototype of the Roman Forum (e.g. Caesar's temple, where you can access the classical texts). They also created a feature where students and teachers can add their own links or texts. The teacher can host their presentation in relation to its context (WOW, GREAT). And the students can post their assignments. As such the AR environment will work as a memory chamber of the stuff you want to learn about.
They did not only want to display the temples, but also give a simulation of the events leading up to the temple of Augustus. The different events that led to the building of the temple are all open for the learner to explore, e.g. Marcus Antonius giving a speech. There is multimedia content, text, links... yes, full and useful (sorry, but I am enthusiastic! This is augmented learning with a vengeance!).
Testing and evaluation: tested with 3rd and 10th graders; the whole system was intuitive and easy to use. Two similar test groups. In February 2010, testing the Parthenon with classics students, they said that the mobile should not be 'competing' with the real objects. The same students were used for testing the Julius Caesar temple; they were enthusiastic and liked the fact that they could add their own comments, and the dramatic features, but again they said this should not replace the real academic professor, though it could complement him/her. This will be made available in the Apple marketplace as an application for adult users.
Abstract as described in the proceedings of the conference We here describe experiments with a potential mobile augmented reality genre for learning, a so-called 'situated simulation' (sitsim). Several prototypes and their key features and functionalities are presented and discussed as they have evolved over several years of design and development work. A particular focus is on the use of sequences of events and actions in the virtual environment. This opens up for new kinds of story lines and narrative structures, which are then described and discussed in relation to narrative theory. Finally, design features for further research and development are suggested.
CONCLUSION – FURTHER DESIGN AND DEVELOPMENT (as put forward in the proceedings of the conference): The development of the sitsim genre prototype is conducted in the context of digital genre design. An overarching approach to this endeavour is to develop a method for how to create innovative communicative and expressive forms based on emerging digital technologies, such as mobile augmented reality on regular smartphones. Feedback from the user testing shows that we are on the right track. The purpose now is to make the sitsims available to larger user groups, for example via Apple's App Store for free download. This will hopefully generate more feedback so that we, the developers, can explore the potential of this 'genre' further. In future versions of the Temple of Divus Iulius sitsim we also plan to include different interpretations of both objects and events. The current version of the Temple is Corinthian and based on German scholar Christian Hülsen's interpretation, but the temple might have been of the composite style. Descriptions of the events surrounding Marc Anthony's speech also differ, depending on which classical source one reads: Appian, Suetonius, Dio or Plutarch. In the current version we used Shakespeare's interpretation from his tragedy Julius Caesar, which was in turn based on Plutarch's account. To be able to switch between alternative interpretations of historical data/accounts will add a valuable dimension to the application.
Learning is contextual. It is a function of the activity and culture in which it occurs. Lave and Wenger (1991) call this pedagogical approach "situated learning." In situated learning the contextual space and place are central. With mobile augmented reality and situated simulations it is possible to support and extend the "situatedness" of learning and education in new ways by means of information technologies (IT). This is not limited to historical topics as described above. It extends to any discipline or subject matter that may benefit from making present what is absent, be it past, current, or future topics. The combination of the real and the virtual (what is simulated) also provides added experience and value. It gives the learner information from multiple sources, what Gregory Bateson in his epistemology has deemed "double description" (Bateson 1988). In his view, the combination of two sources of information generates a new type of knowledge and experience, as is the case with binocular vision (of depth). The notion of double description has been an important perspective in combining the virtual and the real when designing the sitsims presented here, and we believe it has great potential for future solutions.
Canada is a magnet for interactive learning jobs. There is a big demand for well-educated people who are into interactive media.
Engagement in learning is key, and augmented reality enhances the real world and increases that engagement.
QR codes that trigger added layers of 3D 'Roman centurions'... if you put one on a record player, it turns around, nice gimmick :-)
SSAT website (http://www.ssatrust.org.uk, not sure of this website), look at it for the free markers (anatomy triggers: lung, heart, bladder...). Sometimes two markers are used to build interaction: multiple tracking.
(remark to myself: wear a QR-code t-shirt for your presentation)
types of AR
accelerometer, gps, and compass
CV - camera reads marker
CV - camera reads images/objects.
The AR gets people into conversations, into discussions, increasing understanding.
Great interaction with markers, that trigger at learner relevant moments or locations.
A different human interface. Confucius: tell me and I will forget, show me and I will remember, involve me and I will understand.
EyePet from PS3: it does different things, with ongoing interaction with your pet through your PS camera (realllllly nice). Used by Foxhill Primary School; this increased their confidence because the classroom felt more creative.
Also linking to SecondSightExperience: Romeo and Juliet Pilot. Markers were put down that triggered the students.
Culture and Heritage are nice AR opportunities, language understanding... Wroxeter Roman City, English Heritage (Should see this). NT Cragside - Lord Armstrong.
check out: Lustucru - in the highstreet (upper curve of application).
This is how I envision the learning future. This presentation came about while following PLENK, a big online course on Personal Learning Environments, Networks and Knowledge. Alan Levine's comment triggered the presentation below, which is a reply to the question he posed: is augmented reality really going to revolutionize the world?
This is my narrated PowerPoint, which embeds content from people I learn from. In order to actually see the movies that are embedded in the SlideShare, you need to download the PowerPoint.
For all you mLearning lovers out there, there is a clearly written, informative and inspiring report (56 pages) on location-based contextual mobile learning (mostly across Europe). The report has just been published; it is edited by Liz Fitzgerald - Brown from the University of Nottingham, and Mike Sharples has written the foreword.
The report follows on from a 2-day workshop funded by the STELLAR Network of Excellence as part of their 2009 Alpine Rendez-Vous workshop series. Contributors to this report have provided examples of innovative and exciting research projects and practical applications for mobile learning in a location-sensitive setting, including the sharing of good practice and the key findings that have resulted from this work. There is also a debate about whether location-based and contextual learning results in shallower learning strategies and a section detailing the future challenges for location-based learning.
The 56-page PDF is available FREE for download here. The report also has an amazing set of references embedded.
There’s some really nice work in the report, including case studies from maths, geography, psychology, architecture, Geocaching, zoo education and ethical issues with location-based mobile learning. It’s a fantastic resource for the community, providing a good introduction into the issues surrounding location-based learning.
So rush over and download the report to your mobile, or other device, and have a good read! And of course, also share any experience you might have with location-based mLearning. I am always eager to learn from you all!
Sometimes the steps in between learning applications are not mentioned; however, all the steps leading up to a result or to the use of a tool can be interesting and possibly helpful. So I decided to take hold of all the steps leading up to a new Augmented Reality (AR) mobile learning application.
Why do I look at AR? Because it can make learning outside the classroom even more captivating, because AR allows a teacher/informal learner/... to add a virtual layer of information to the real world. The nice thing about AR is also that it lights up only when a person is at a certain point. This means that as a user or a learner you only get the information that is relevant to you at a certain location and time.
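To make that "lights up at a certain point" idea concrete, here is a minimal sketch of a location trigger: content only appears when the learner's GPS position comes within a radius of a point of interest. The coordinates and radius are made-up examples, not tied to any particular AR browser.

```python
# A minimal location-trigger sketch: content fires when the learner's GPS
# position is within a radius of a point of interest (example data).
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

POINTS_OF_INTEREST = [
    ("Eiffel Tower lesson", 48.8584, 2.2945, 100),  # name, lat, lon, radius in m
]

def triggered(learner_lat, learner_lon):
    """Return the lessons that should 'light up' at this position."""
    return [name for name, lat, lon, radius in POINTS_OF_INTEREST
            if distance_m(learner_lat, learner_lon, lat, lon) <= radius]

print(triggered(48.8585, 2.2946))  # -> ['Eiffel Tower lesson']
```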
So what did I do today?
Today I got my newly bought HTC Desire ready (well, I bought it two months ago, but I kept using my other phone while trying out the HTC Desire's functions).
Secondly, I got a Layar developer account (http://dev.layar.com/). This builds on a previous post that I wrote in June 2009, when Layar got its first big media attention. What is Layar? Layar is currently a top tool if you want to try out AR on mobile phones (the iPhone 3GS and Android phones). It enables a user to build a layer onto the real world. For example: if you are standing in front of the Eiffel tower and you are wondering where to find a good, real French bar, you take out your mobile phone, activate Layar, and look through your phone at the surroundings of the Eiffel tower (well, words do not tell the whole picture, so feel free to take a look at the Layar movie below). But how does this work? The technical logic behind it is quite complex, but in simple terms: if your mobile phone has a compass and GPS, it can calculate where you are standing AND in which direction you are aiming your device.
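For the curious, here is a minimal sketch of that GPS-plus-compass logic (my own illustration, not Layar's actual code): compute the bearing from where you stand to a point of interest, and show it only if it falls within the camera's field of view around your compass heading. The bar's coordinates are invented.

```python
# GPS + compass sketch: is the point of interest within the camera's view?
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, 0-360 degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def in_view(heading, target_bearing, fov=60):
    """Is the target within the field of view centred on the compass heading?"""
    diff = abs((target_bearing - heading + 180) % 360 - 180)
    return diff <= fov / 2

me = (48.8589, 2.2950)   # standing near the Eiffel tower (example position)
bar = (48.8600, 2.2970)  # a hypothetical French bar
b = bearing_deg(*me, *bar)
print(f"bar is at bearing {b:.0f} deg; visible: {in_view(heading=45, target_bearing=b)}")
```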
And last but not least, I got a Hoppala account (you have to get a Layar developer account before you can get access to Hoppala). Hoppala is an easy step up to Layar, as it enables you to simply add some information to the real world by writing information that is attached to a location (also here, feel free to look at the movie on Hoppala below).
After this, I created a layer of my own, a simple one: arlearning. I will add a link to the layer as soon as I have some content on it, but that will be for a later post.
When I was testing out Layar as it exists in my location, I stumbled upon Tweepsaround. This is a wonderful location-based application that gives you the time and content of tweets that were written near your location. You can even choose to go to the location (via Google Maps) where the tweet was sent. Really nice! Just to give you an idea: it took me less than 1 hour and 15 minutes to get all of this together (not counting the wait for the Layar developer code to arrive, as this can take a day).
Let's look at some explanatory videos:
Layar (this is not the latest promotional video of Layar, but the more comprehensible older one):
And now for Hoppala, the first mobile app I will be using to create a learning app:
Have fun, and share if you have a layer of your own! I would love to have a look.
If you only have a limited budget but you do need to build scenarios that show human-to-human communication, this might be good software to use. Me, I want to learn it as a next step towards augmented learning. While gearing up to develop an augmented reality project (triggering virtual reality to be placed on top of a real-life setting), I have one other obstacle to tackle: learning to build an animated character or object that can feature in an AR environment. There are many software packages on the market for building your own animated movie, but someone suggested Moviestorm, and indeed it works magnificently!
Short description of the software
Moviestorm is animation movie software with a free version (it was completely free, but one of the founders of Moviestorm just informed me that it is no longer free; you can still try it out for free for 30 days), and you can purchase extra features in the 'marketplace' on the software website. The software enables its users to build animated movies featuring virtual characters in a self-designed decor. As a user you can set the stage, influence lighting, add and change props and characters, use different camera angles for the movie (and even add camera viewpoints), edit both audio and video, and publish the final product to any audiovisual carrier which is open to the format (AVI).
How can Moviestorm be used in a digital learning environment?
While learning the Moviestorm software will allow me to use its results in an augmented reality learning situation later on (yes, a steep learning curve lies ahead :-) ), it also has other learning situations in which it can be applied. It allows me to use the animations to create human-to-human interaction learning examples without having to engage real-life actors, clinics or laboratories, thus saving costs. The human-to-human interactions might be of interest to psychology, sales, medical or other learning settings where human interactions can be illustrated as 'best practice' or completely 'not done' scripts (if you cut an artery, this is what happens...), adding examples to support the theoretical manuals already available in the courses.
Coming to grips with Moviestorm
In Moviestorm you can choose to start from scratch, or to build your first movie using existing sets, characters, furniture or other props. I chose to start from scratch, building my first movie from an empty set (I learn best when a challenge is in my face; the learning curve is steeper, but the results are - most of the time - stronger).
Different steps to take while building a movie with Moviestorm
Setting the scene: In order to have the feel of an AR setting, I wanted to use a real-life picture. So I climbed to the highest point of the institute's building and took a panoramic view of the institute (= the background). In addition to the background I added some features (microphones) that would fit the setting I had in mind for the movie. Additionally, I changed the lights on the scene and I added some walls on the side, allowing me to let the characters enter the 'roof'. This was not really necessary for the AR later on, but it gave me the possibility of using a door movement, a good extra exercise for possible future scenarios.
Building characters: If I wanted to use characters in my AR that resembled different persons, I needed to customize the given characters in the Moviestorm movie as well. So I chose to go with four different characters, building them with the garments (casual, formal…), facial types (strong eyebrows, cheekbones…) and hairstyles (beards, moustaches, glasses or not…) from the free Moviestorm library. For more professional garments (doctor, nurse…) I would have to buy an additional set from Moviestorm.
Adding motion: Once I had my setting and my characters, it was time to put them onto the scene and make them move. This is where the Moviestorm tutorials really came in handy. Moviestorm offers a wide variety of movements, and where the building of characters and the selection of the setting are fairly straightforward, getting all the movements in was a bit of a challenge. It was not enough to pick up a character and place it somewhere else: if I wanted to make the animated movie look more like real life, I had to put in extra gestures to bring the characters to life (well, I must admit that was one of the guidelines suggested by the tutorial on movement; it did make sense, as at first my characters were rather static). The characters inside Moviestorm can also pronounce words or sentences, meaning their lips will move when you type in the text they are supposed to say. This lip syncing is also considered a motion. The actual audio that accompanies the text the characters sync to must be added later on, or can be embedded using an existing audio file.
Adding sound: To distinguish between the different characters, I opted to use different tones of my voice to give the characters distinct voices. This is where the off-key voice comes in as well (yes, I am a lousy singer). Inside Moviestorm you can record 'live', and as such link the live recording to the text spoken by the characters. But you can also add your own edited audio to the text by importing a sound you made beforehand. To get my audio edited I used the free audio editing software Audacity. To make the intro of the movie more 'rooftop'-like, I added an ambient sound that was available in Moviestorm: the suburbia 02 sound (driving cars, etc.).
Editing the movie: After putting all the characters through their motions, the movie could be edited. Again the Moviestorm tutorials come in handy. The tutorial offered for editing only touched on the basic possibilities, and for me that was not enough to grasp it. I edited the scenes in the wrong way, pasted content which was not relevant to scenes… made a mess at first. But by going through some of the peer tutorials, it became clearer what I had to do and how I could build and cut the complete movie. In the editing process you can also switch between cameras.
Keeping on top of the movie: As the movie is built, the user of the software can always go and take a look at the product as it was or is at that point in time. This part was easy and straightforward, as it feels like using YouTube movies: play and stop buttons, fast-forward options, a timeline…
Publishing: The final and definitely most straightforward function in Moviestorm is publishing the movie. You can publish a movie by simply selecting the format size and clicking the 'publish' button. Really easy.
Well, I still need to work on my audio files: apparently the high-pitched sounds in the Audacity files resulted in pitchy sounds in the final movie (not while I was listening to the preview of the movie, so I need to see what really happens in the audio rendering). Overall, I was really happy that it worked. One step closer to AR, yeah!
This is a very nice challenge: it is short, and the question allows you to focus on the gut feeling you get when looking at all the new emerging technologies.
With mobile technology on the rise and all of us eLearning providers grasping the benefits of mobile learning, I feel confident that the most innovative corporations and academic institutions will invest in their learning environments. These investments will embrace the new learning opportunities that result from adopting a new sense of learning, augmented learning, and a new way of learning, ubiquitous learning.
Augmented learning: Through augmented learning a completely new layer is added to the real and virtual learning that is happening right now. Augmented learning allows us to perceive the real world and add an extra learning layer to it. This learning layer might be delivered on top of the existing real world we observe (e.g. the mobile browser Layar), or it can allow us to recognize the real world and get information on it (e.g. Google Goggles), or an augmented reality can be brought to life via a marker held up to a camera (e.g. ARToolKit).
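As a small illustration of that last, marker-based flavor, here is a minimal sketch using OpenCV's aruco module rather than ARToolKit itself (so a stand-in, not ARToolKit's actual API): a detected marker id triggers a piece of learning content. The lesson mapping is invented, and note that the aruco function names changed in OpenCV 4.7; this uses the older ones from opencv-contrib-python.

```python
# Marker-based AR sketch: a camera frame is scanned for ArUco markers, and
# each detected marker id triggers a (hypothetical) piece of learning content.
import cv2

LESSONS = {0: "Show 3D heart model", 1: "Show lung animation"}  # invented mapping

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)  # the device camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_id in ids.flatten():
            # A real app would render content at the marker's pose;
            # here we just name the lesson the marker would trigger.
            print(LESSONS.get(int(marker_id), f"unknown marker {marker_id}"))
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```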
These innovative learning techniques seem far off, but they are actually already being incorporated in some corporate learning environments (see this Finnish example where engineers get on-site information from which they learn what can be improved, and which enables them to give immediate feedback to the site again: link to the plant).
Augmented reality allows you to get more information than you can see with the naked eye. And because it can be linked to mobile devices, you can have that information where you need it.
Ubiquitous learning: Our learning environments have evolved. Until only a couple of years ago, learning was rather tied to a certain location: your desk or school. With mobile learning on the rise, learning can happen independent of location (and whenever you want it). But in order to do this, the learning environment must be adapted so this ubiquitous learning can occur.
The learning environment must be able to cater to a variety of technologies (sometimes you connect with your computer, sometimes with your mobile, sometimes with someone else's device...) and it must be built so the learner can reach relevant information in a variety of ways (e.g. QR codes, mobile internet, wifi...). It must also cater to the needs of the learner (some of us learn through our network => social media access, some of us through a content management system => a CMS that allows multiple-device access...).
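As a small illustration of the simplest of those access routes, here is a minimal sketch that generates a QR code pointing learners at a lesson URL, using the Python qrcode package (pip install qrcode[pil]); the URL is a made-up example.

```python
# Generate a QR code that links learners to a lesson (hypothetical URL).
import qrcode

img = qrcode.make("https://example.org/course/mlearning/lesson1")
img.save("lesson1_qr.png")  # print it, stick it on the wall, scan it with any phone
```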
At this point in time it still takes a lot of effort to really roll out a ubiquitous environment, but in about 5 years we should be ready to feel comfortable living in a world where we learn what we want, whenever we want, and with any device we can get our hands on.
Of course this does express the need for standardization, both on the device side and on the connection side (oh, wouldn't that be great!).