The Internet and new technology give rise to many new ways to research, analyse and evaluate. Many trends can now be analysed via Google Analytics, Google blog search, Twitter hash tags, YouTube hits and rankings, Facebook analytics, Amazon rankings and ratings, and other methods. This has led to many new and exciting possibilities for data gathering, research and evaluation. The fact that much of this data (thanks to technology) now comes to us easily, and at no cost, is of great benefit, particularly to small organizations such as mine.
For example, as my organization moves into creating a social enterprise, we are using Google Analytics to select training topics and to pick the best topics for e-books to sell on Amazon. Knowing what is popular and where the gaps are greatly increases our chance of success in building a strong social enterprise. Google Analytics is also an incredible tool for our website, both to evaluate user trends and to build the site with the end user in mind. To use another example, Facebook automatically tells me each week about all of the trends for our organization’s Facebook page. Thanks to this free feature, I can readily understand the trends and adjust the content or the marketing of our Facebook page accordingly.
On a separate topic… this week’s reading started with this intriguing comment:
“Thus connectivism is perceived as relevant by its practitioners but as lacking in rigour by its critics”. (Frances Bell, Connectivism: Its Place in Theory-Informed Research and Innovation in Technology-Enabled Learning. http://www.irrodl.org/index.php/irrodl/article/view/902)
Immediately my ears perked up! I work in a sector (the nonprofit, charitable sector) and a domain (adult literacy) that both endure an ongoing struggle: we do important, relevant work, but we are constantly criticized as lacking in rigour (or sound evaluative data).
Some immediate questions jumped to mind regarding research and evaluation: Can all services be quantified and sold at a price to satisfy the bureaucrats among us? Does everything fit nicely in a box the way some would like? Is life neat and tidy the way government funders would have it? Can everything worth doing readily be evaluated?
As noted in the above article, evaluation techniques will vary depending upon the scope and purpose of the evaluation, the funding available for the evaluation, and the skills, experience and philosophies of the evaluators. However, I would add that evaluation techniques will also vary based on the goals of those driving the evaluation, and on whether the evaluation is for internal purposes (such as course improvement) or for external purposes, such as governments needing random, meaningless data for their own mysterious purposes.
Here is a library in Haiti. I don’t think its value could be easily evaluated and quantified. But its worth is priceless.