How to install Hypothesis

This tutorial will cover: 

  • How to install
  • How to annotate the web
  • How to make multimodal annotations
  • How to share your annotations

* instructions/tutorials are from

How to install Hypothesis

Step One: Go to
Step Two: Click on Install Chrome Extension*
Step Three: Accept permission (note menu not visible in image below)

*Firefox extension coming soon


Step Four: A new page will open. Go to the page
Step Five: Click on Create Account
Step Six: Use a username (pseudonyms are allowed, which is good for student privacy, though I would stick with a Twitter handle for adult learners)
Step Seven: Add an email address and password



Step Eight: Check your email and activate your account:



How to Annotate the Web

Step One: Choose a text worthy of reading
Step Two: Click on the speech box in the upper right-hand corner.
Step Three: Highlight Text
Step Four: Click on Pen



Step Five: Add an annotation
Step Six: Click Save



How to Make Multimodal Annotations

Add an Image

Step One: Find the relevant image. Copy the image URL (right-click on the image > Copy Image Address)
Step Two: Highlight text and click on the annotation pen.
Step Three: Paste your image URL into the code.
Step Four: Add an image description
Step Five: Add an optional description below. This text will be displayed, helping make the web accessible to all.
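The steps above produce standard Markdown image syntax in the annotation editor. As a sketch (the URL and descriptions here are placeholders, not real values the editor inserts):

```markdown
![Short image description](https://example.com/my-image.jpg)

Optional longer description shown below the image.
```

The text inside the square brackets is the image description (alt text) from Step Four.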



Add a Link:

Step One: Highlight Text:
Step Two: Click on Link:
Step Three: Copy in Link:
Step Four: Type in the link text


Important: the brackets and parentheses must not be deleted.
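For reference, the link tool produces standard Markdown link syntax (the URL below is a placeholder):

```markdown
[the link text you typed](https://example.com/page)
```

The square brackets hold the link text and the parentheses hold the URL, which is why neither can be deleted.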

How to Share Your Annotations

Tag your Annotations

Tagging your annotations lets people find them.

Step One: Add the tag (note that the “#” sign is not necessary)
Step Two: Add a tag if this annotation refers to a specific codebook tag (claim, evidence, source)
Step Three: Add an optional classroom tag as assigned by the instructor.





Recording/Projecting iOS screens with Quicktime Player

QuickTime on the Mac is one of those tools that many Mac users don’t fully utilize. I have met lots of instructors who are Mac users and are unaware that they can use QuickTime to create quick screencasts.

Recently I discovered another neat thing you can do with QuickTime: use it to project or record the screen of a connected iOS device.

First, connect your device to your computer with the USB cable, then open QuickTime and select File > New Movie Recording.


QuickTime will default to your FaceTime camera, so you need to change the camera source to your iOS device.







Once you have selected your device you should see it appear on the screen











Once your device’s screen is being shown you can start recording your screen.

It’s a great way to project your screen to demonstrate an app or a process on your device, and a convenient way to create a screencast of a process on your iOS device (see the video above showing some aspects of the Figure 1 app).


Coalescing Learning Spaces

I wrote a blog post a while back about David White’s Visitors and Residents framework. I had always assumed that within this framework the word “spaces” referred only to the digital/online environment. However, in another post he describes a space as any location where people are, or where we go to be co-present with others, and this includes both the online world and the physical lecture theatre or classroom we may find ourselves in.

This got me thinking about technology use in the classroom. In a lot of cases faculty either ban the use of smartphones, laptops, etc. because of concerns about their potential for distraction, or simply don’t find ways to utilize them in their lessons.

This is where I think David White’s coalescent framework has a lot of value, because it forces us to envision ways in which technology can be a resource as opposed to a distraction. As David White suggests, we need to design pedagogy that coalesces physical and digital spaces. If we can be explicit about the value of integrating both physical and digital spaces when we work with faculty, and show them successful examples of coalescent designs, they may be more receptive to the idea of students using devices in class.



A brief look at chardin.js

I recently came across a neat little tool called chardin.js, which allows you to insert instructions on a page to provide direction to users. Chardin.js overlays instructions on page elements using visual guides, which can be modified depending on your needs.


I decided to have some fun with it and used it to overlay information on a statue outside Arsenal’s Emirates stadium (image above). One limitation of this plugin is that, unlike other plugins such as intro.js which take users on a clickable tour through an interface, chardin.js offers a more static approach. However, it does have some potential for displaying helpful hints and information, or for highlighting certain parts of an image and emphasizing important elements on a page. To be continued…
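For anyone curious, here is a minimal sketch of how chardin.js gets wired up, based on its README (the file paths, image, and caption text are my own placeholders):

```html
<!-- chardin.js depends on jQuery; load it along with the plugin's script and stylesheet -->
<link rel="stylesheet" href="chardinjs.css">
<script src="jquery.min.js"></script>
<script src="chardinjs.min.js"></script>

<!-- Each element to be labeled gets a data-intro attribute;
     data-position controls which side the label appears on -->
<img src="statue.jpg" alt="Statue outside the Emirates"
     data-intro="Thierry Henry statue" data-position="right">

<script>
  // Display all the overlaid instructions at once
  $('body').chardinJs('start');
</script>
```

This all-at-once display is the "static approach" mentioned above: every label shows simultaneously rather than stepping through a tour.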



Weekly Bookmarks (weekly)

Posted from Diigo. The rest of my favorite links are here.

Exploring Interactive Text

The writing tool telescopic text came up in one of our team conversations recently, which led to a longer conversation about text and the various ways text is represented online. That conversation made me flash back to Bret Victor’s idea of “explorable explanations”. In his 2011 article he asks the question:

what does it mean to be an active reader?

In his essay Bret Victor suggests three possible ways to facilitate active reading:

  • reactive documents: these allow the reader to play with the author’s assumptions and analyses, and see the consequences.
  • explorable examples: these make the abstract concrete, and allow the reader to develop an intuition for how a system works.
  • contextual examples: these allow the reader to learn related material just-in-time, and cross-check the author’s claims.




Other similar interactive visualizations can be found on

While these examples point to interesting ways to engage learners, the problem is the skills required to create such resources. Tools such as telescopic text are relatively easy to use and don’t require any special skill or knowledge. On the other hand, creating an interactive like the parable of polygons requires some coding skills. In order for faculty to embrace and adopt these tools, they will have to become more user-friendly. An example of a user-friendly resource is keshif, a data browser which allows users to visualize and explore data. The only step needed is to upload data in a particular format via Google Docs. Currently I’m interested in exploring user-friendly tools that can be used to augment text to encourage active learning. Crossfilter and dc.js are two tools I’m exploring right now.

Mapping Instagram Posts

Last week I spoke with a faculty member involved in the VCU Bike Race Book project who wanted a way to map her students’ tweets during the course. After a couple of web searches, it seems that a lot of the web services that offer tweet mapping either map just one user’s tweets or charge a fee for their services. Preferably we’d like to map posts with a particular hashtag/keyword and not restrict it to one user’s tweets. I know Tom Woodward is also exploring as another option.
In any case, the students in this class will also be using Instagram to share, so I started to look for a way to map geotagged Instagram posts. I came across the Karten plugin, which finds geotagged posts and images relating to specific hashtags and maps them on a Google map. Karten can be used in any post or page with a shortcode and is quite simple to set up. The plugin can be downloaded from GitHub (zip). Once installed, you need to provide the following in Karten’s settings:

Google Maps API key
Instagram API Client ID
Instagram API Client Secret
Instagram API Access Token

When all of this is set up you can create a new map, select the keyword/hashtag and then embed it using the shortcode. The resulting map might look like the one above.
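I don’t have the plugin’s exact shortcode attributes in front of me, so treat this as a hypothetical sketch and check Karten’s README for the real attribute names. Embedding a map in a post might look something like:

```
[karten id="1"]
```

where the id points at the map you created (with its keyword/hashtag) in Karten’s settings.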



Weekly Bookmark Highlights (weekly)

Posted from Diigo. The rest of my favorite links are here.