This tutorial will cover:
- How to install Hypothes.is
- How to annotate the web
- How to make multimodal annotations
- How to share your annotations
*Instructions/tutorials are from http://jgregorymcverry.com/
How to install Hypothesis
Step One: Go to Hypothes.is.
Step Two: Click on Install Chrome Extension*
Step Three: Accept the permissions prompt (note: the menu is not visible in the image below)
*Firefox extension coming soon
Step Four: A new page will open up. Go to that page
Step Five: Click on Create Account
Step Six: Choose a username (pseudonyms are allowed, which is good for student privacy, but I would stick with a Twitter handle for adult learners)
Step Seven: Add an email address and a password
Step Eight: Check your email and activate your account.
How to Annotate the Web
Step One: Choose a text worthy of reading
Step Two: Click on the speech box in the upper right-hand corner.
Step Three: Highlight Text
Step Four: Click on Pen
Step Five: Add an annotation
Step Six: Click Save
How to Make Multimodal Annotations
Add an Image
Step One: Find the relevant image. Copy the image URL (right-click on the image > copy the image address)
Step Two: Highlight text and click on the annotation pen.
Step Three: Paste your image URL into the code.
Step Four: Add an image description
Step Five: Add an optional description below. This text will be displayed, helping keep the web accessible to all.
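The Hypothesis annotation editor accepts Markdown, so the finished image annotation looks like the sketch below (the URL and descriptions are placeholders):

```markdown
![Short image description](https://example.com/statue.jpg)

An optional longer description can follow as plain text below the image.
```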
Add a Link:
Step One: Highlight Text:
Step Two: Click on Link:
Step Three: Paste in the link:
Step Four: Type in the link text
Important: the brackets and parentheses must not be deleted.
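The link button inserts standard Markdown link syntax, which is why the brackets and parentheses must stay intact: the brackets hold the link text and the parentheses hold the URL. A sketch with a placeholder URL:

```markdown
[the link text](https://example.com/article)
```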
How to Share Your Annotations
Tag your Annotations
Tagging your annotations lets people find them.
Step One: Add the tag (note: the “#” sign is not necessary)
Step Two: Add a tag if this annotation refers to a specific codebook tag (claim, evidence, source)
Step Three: Add an optional classroom tag as assigned by the instructor.
QuickTime on the Mac is one of those tools that many Mac users don’t fully utilize. I have met plenty of instructors who use Macs but are unaware that they can use QuickTime to create quick screencasts.
Recently I discovered another neat thing you can do with QuickTime: use it to project or record the screen of a connected iOS device.
First you need to connect your device to your computer with its USB cable, then open QuickTime and select New Movie Recording.
QuickTime will default to your FaceTime camera, so you need to change the camera source to your iOS device.
Once you have selected your device, you should see its screen appear in the window.
Once your device’s screen is being shown, you can start recording.
It’s a great way to project your screen to demonstrate an app or a process on your device, and it’s a convenient way to create a screencast of a process on your iOS device (see the video above showing some aspects of the Figure 1 app).
I wrote a blog post a while back about David White’s Visitors and Residents framework. I had always assumed that within this framework the word “spaces” referred only to the digital/online environment; however, in another post he describes a space as any location where people are, or where we go to be co-present with others, and this includes both the online world and the physical lecture theatre/classroom we may find ourselves in.
This got me thinking about technology use in the classroom. In a lot of cases faculty either ban the use of smartphones, laptops, etc. due to concerns about their potential for distraction, or they simply don’t find ways to utilize them in their lessons.
This is where I think David White’s coalescent framework has a lot of value, because it forces us to envision ways in which technology can be a resource as opposed to a distraction. As David White suggests, we need to design pedagogy which coalesces physical and digital spaces. If we can be explicit about the value of integrating both physical and digital spaces when we work with faculty, and show them successful examples of coalescent designs, they may be more receptive to the idea of students using devices in class.
I recently came across a neat little tool called chardin.js, which overlays instructions on elements of a page using visual guides that can be modified to suit your needs.
I decided to have some fun with it and used it to overlay information on a statue outside Arsenal’s Emirates stadium (image above). One limitation of this plugin is that, unlike plugins such as intro.js which take users on a clickable tour through an interface, chardin.js offers a more static approach. However, it does have some potential for displaying helpful hints and information for users, or for highlighting certain parts of an image and emphasizing important elements on a page. To be continued…
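As a rough sketch of how chardin.js is wired up (the file paths, image, and label text are placeholders; the plugin depends on jQuery, and the exact markup should be checked against the project’s README): each element to be labelled gets data-intro and data-position attributes, and the overlay is toggled with a single call.

```html
<!-- Sketch of a chardin.js overlay; paths and image are placeholders -->
<link rel="stylesheet" href="chardinjs.css">

<img src="statue.jpg" alt="Statue outside the Emirates stadium"
     data-intro="A helpful hint about this statue"
     data-position="right">

<button id="show-help">Show overlay</button>

<script src="jquery.min.js"></script>
<script src="chardinjs.min.js"></script>
<script>
  // Start the instruction overlay when the reader asks for help
  $('#show-help').on('click', function () {
    $('body').chardinJs('start');
  });
</script>
```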
How it Works
“How Long to Read is a search engine that allows you to find almost any book and get your reading time for each one. Use the search bar on the home page to search for any of over 12 million books on How Long to Read.”
Team Productivity Through Slack – ProfHacker – Blogs – The Chronicle of Higher Education
Interesting article on Slack as an alternative means of communication; interesting comments too.
Jessie Hartland’s “Insanely Great” tells Steve Jobs’ life story as a graphic novel | Macworld
“the newly released Steve Jobs: Insanely Great embraces brevity—it’s a quick read at about 225 pages. Also important to note: It’s a graphic novel, translating the Apple co-founder’s upbringing, storied career, and personal life into a series of black-and-white, pencil-drawn panels.”
How Cats Became Rulers of the Interwebs | WIRED
“How Cats Took Over the Internet, a new exhibit at New York’s Museum of the Moving Image, traces the evolution of these memes from early chatrooms into sponsored cat celebrities. On its face, the exhibit sounds a bit like Tumblr IRL: cat GIFs cast onto the white walls of two galleries, adorable felines kissing and scampering in endless loops.”
Posted from Diigo. The rest of my favorite links are here.
The writing tool telescopic text came up in one of our team conversations recently, which led to a longer conversation about text and the various ways it is represented online. This conversation made me flash back to Bret Victor’s idea of “explorable explanations.” In his 2011 article he asks the question:
what does it mean to be an active reader?
In his essay Bret Victor suggests three possible ways to facilitate active reading:
- reactive documents: these allow the reader to play with the author’s assumptions and analyses, and see the consequences.
- explorable examples: these make the abstract concrete, and allow the reader to develop an intuition for how a system works.
- contextual examples: these allow the reader to learn related material just-in-time, and cross-check the author’s claims.
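To give a sense of what a minimal “reactive document” involves, here is my own illustrative sketch (not Victor’s code; the 50-calories-per-cookie figure is an arbitrary assumption). A few lines of plain HTML and JavaScript are enough to let the reader change an assumption and watch the conclusion update:

```html
<!-- Minimal reactive-document sketch: edit the number and the
     dependent sentence recomputes immediately. -->
<p>
  If you eat <input id="cookies" type="number" value="3" min="0"> cookies,
  you consume <span id="calories"></span> calories.
</p>
<script>
  var input = document.getElementById('cookies');
  var output = document.getElementById('calories');
  function update() {
    // Assumed conversion: 50 calories per cookie (illustrative only)
    output.textContent = Number(input.value) * 50;
  }
  input.addEventListener('input', update);
  update(); // render the initial value
</script>
```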
Other similar interactive visualizations can be found on setosa.io
While these examples point to interesting ways to engage learners, the problem is the skill required to create such resources. Tools such as telescopic text are relatively easy to use and don’t require any special skill or knowledge. On the other hand, creating an interactive like the Parable of the Polygons requires some coding skill. For faculty to embrace and adopt them, the tools will have to become more user-friendly. An example of a user-friendly resource is keshif, a data browser which allows users to visualize and explore data; the only step needed is to upload data in a particular format via Google Docs. Currently I’m interested in exploring user-friendly tools that can be used to augment text and encourage active learning. Crossfilter and dc.js are two tools I’m exploring right now.
Last week I spoke with a faculty member involved in the VCU Bike Race Book project who wanted a way to map her students’ tweets during the course. After a couple of web searches, it seems that a lot of the web services that offer tweet mapping either map just one user’s tweets or charge a fee for their services. Preferably we’d like to map posts with a particular hashtag/keyword, not just restrict it to one user’s tweets. I know Tom Woodward is also exploring tweettomap.com as another option.
In any case, the students in this class will also be using Instagram to share, so I started to look for a way to map geotagged Instagram posts. I came across the Karten plugin, which finds geotagged posts and images relating to specific hashtags and maps them on a Google Map. Karten can be used in any post or page with a shortcode, and it’s quite simple to set up. The plugin can be downloaded from GitHub (zip). Once installed, you need to provide the following in Karten’s settings:
Google Maps API key
Instagram API Client ID
Instagram API Client Secret
Instagram API Access Token
When all of this is set up you can create a new map, select the keyword/hashtag and then embed it using the shortcode. The resulting map might look like the one above.
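For illustration only, the embed might look something like the shortcode below. I haven’t verified the exact shortcode name or attributes, so treat both as hypothetical and check the plugin’s README before using it:

```
[karten id="vcu-bike-race"]
```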
The Problem With Putting All the World’s Code in GitHub | WIRED
“possible to follow the development of a particular piece of software and see how it all came together. That’s made it an irreplaceable teaching tool.”
This video game is so small it fits inside a tweet
“In the same way that people write words onto grains of rice, one programmer has managed to build a game with code that can fit into a single tweet. The 140-character opus is called Tiny Twitch.”
tota11y – an accessibility visualization toolkit
“tota11y makes it easy to spot some of the most common accessibility violations made by page authors today.
Beyond simply pointing out errors, many tota11y plugins also suggest ways to fix these violations – specifically tailored to your document.”
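tota11y itself is deployed as a single script include; a minimal sketch, assuming the file has been downloaded locally:

```html
<!-- Include tota11y near the end of <body>; it adds a small toolbar
     for toggling accessibility annotations on the current page. -->
<script src="tota11y.min.js"></script>
```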
Nature and Viz | DataRemixed
“Many of nature’s patterns are both mesmerizing and incredibly informative, like the tree rings that encode important data about the life of a tree. If you can inspire as well as inform, then why wouldn’t you do both? Nature does.”
How Virtual Reality can Improve Online Learning > ENGINEERING.com
“online courses also limit you in some ways — there’s little immersive or tactile interaction, and sometimes it’s hard for students to engage with the material. IVR [Immersive Virtual Reality] systems are a potential solution to that problem.”
Posted from Diigo. The rest of my favorite links are here.