When Google cataloged its one-trillionth web page last year, it seemed like an event of epistemological proportions. Trillions aren't figures we bandy about lightly, unless we are talking about the federal deficit or China's foreign currency reserves.
Though such a figure is mind-boggling and represents an unthinkable amount of content accessible to anyone with an internet connection, it is really only a fraction of the information that could be mined. There are still databases of information at corporations, governments, and universities waiting to be brought into public view.
Enter DeepPeep, a National Science Foundation-supported project based at the University of Utah that aims to probe the web deeper than any search engine has gone before. Like the Semantic Web, DeepPeep seeks to develop complex computational models to mine currently inaccessible information.
Jonathan Zittrain, author of The Future of the Internet and How to Stop It, is one of the bigger proponents of new navigation tools for the web. Listen to his interview with Stanford University Radio here, and also his comments from when he sat down with Big Think.
Chances are, if you frequent Big Think, you spend a significant amount of time on the web. Let us know how you have been faring with your Google searches. Is there enough content out there in Web 2.0, or is it time for a new iteration?