This post is a bit late, but among the top 50 biblioblogs for October 2010, the top 10 student biblioblogs are:
| Student Rank | Overall Rank | Author(s) | Blog | Alexa Rank |
|---|---|---|---|---|
| 1 | 2 | Joel L. Watts | Unsettled Christianity | 95521 |
| 2 | 8 | Scott Bailey | Scotteriology | 212042 |
| 3 | 12 | Jeremy Thompson | Free Old Testament Audio Website Blog | 294803 |
| 4 | 15 | Jonathan Robinson | Xenos | 300343 |
| 5 | 18 | Brian LePort, JohnDave Medina, and Robert Jimenez | Near Emmaus: Christ and Text | 382933 |
| 6 | 21 | Mark Stevens | Scripture, Ministry, and the People of God | 420079 |
| 7 | 22 | Phillip Long | Reading Acts | 431256 |
| 8 | 25 | S. Demmler | You Can’t Mean That! | 503362 |
| 9 | 26 | Gavin Rumney | Otagosh | 503927 |
| 10 | 29 | Bacho Bordjadze | Reading Isaiah | 533766 |
As always, updates and corrections are welcome, particularly for those who may have recently matriculated or graduated.
I’m curious: what criteria are used to determine the top 10 and top 50 blogs? Number of visitors? So, popularity? Or something else?
The overall top 50 list that Jeremy maintains and produces is, as I understand it, based purely on the (largely traffic-dependent) site rankings that Alexa.com provides for his list of bibliobloggers. I then work through this list and identify the subset of the top ten student bibliobloggers, as measured by that same instrument. Thus far, there have always been at least ten student bibliobloggers on the list. If that happens to change in a given month, though, I might have to go with the top nine or something. 🙂
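For the curious, here is a minimal sketch of that procedure in Python: sort the overall list by Alexa rank (lower means more traffic) and keep the first ten student blogs. The `Blog` structure, the `is_student` flag, and the sample entries other than those drawn from the table above are my own illustrative assumptions, not Jeremy’s actual data or code.

```python
from dataclasses import dataclass

@dataclass
class Blog:
    name: str
    alexa_rank: int   # Alexa site rank; lower is better (more traffic)
    is_student: bool  # hypothetical flag marking student bibliobloggers

def top_student_blogs(blogs, n=10):
    """Sort by Alexa rank (ascending) and return the first n student blogs."""
    ranked = sorted(blogs, key=lambda b: b.alexa_rank)
    return [b for b in ranked if b.is_student][:n]

# Illustrative usage; "Some Faculty Blog" is a made-up non-student entry.
blogs = [
    Blog("Some Faculty Blog", 50000, False),
    Blog("Unsettled Christianity", 95521, True),
    Blog("Scotteriology", 212042, True),
]
print([b.name for b in top_student_blogs(blogs)])
```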
Thanks for the response. That is what I thought.
Why is popularity the primary criterion for importance? Should it be THE criterion? In other words, this list is not very useful for me. It might highlight a good blog, but good blogs are not always popular. The biblioblogs I follow (including yours!) are not on this list, nor on the list of 50.
The whole idea of blog rating needs to be rethought from the ground up, IMO.
You are, of course, quite right. Judging “good books” simply to be those that appear on the New York Times Best-Seller List would be similarly problematic. That’s one reason I don’t put too much weight on the qualitative accuracy of these lists. There are a number of blogs ranked lower on, or absent from, the list (like yours) to which I pay more attention and which I find more helpful than those with higher Alexa rankings. So, qualitatively speaking, the monthly lists have mainly amusement value at present, for me at least. Of course, if a different metric or combination of metrics could be used to produce a more qualitatively accurate ranking, that would be very much preferable. There was some discussion of this issue a while back, but I don’t remember seeing, nor have I since conceived of, another workable metric or combination of metrics. As I write this reply, though, the whole issue strikes me as something that could benefit from an empirical-humanistic perspective, one that might bring more objective criteria to an otherwise subjective, qualitative task. 🙂 Any thoughts?
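To make the “combination of metrics” idea concrete, one simple approach would be to normalize each metric to a common scale and take a weighted average, so that raw traffic alone cannot dominate the ranking. The metric names and weights below are purely hypothetical assumptions for illustration, not a proposal anyone has actually adopted.

```python
def combined_score(metrics, weights):
    """Weighted average of metrics already normalized to [0, 1].

    metrics: {metric_name: value in [0, 1]}
    weights: {metric_name: nonnegative weight}
    """
    total = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in weights) / total

# Example: blend a traffic-based score with, say, reader-survey and
# citation measures (all values and weights here are invented).
score = combined_score(
    {"traffic": 0.9, "survey": 0.4, "citations": 0.6},
    {"traffic": 0.3, "survey": 0.4, "citations": 0.3},
)
print(round(score, 2))
```

The hard part, of course, is not the arithmetic but deciding which metrics to include and how to weight them, which is exactly where the more subjective, qualitative judgment comes back in.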