Searching News from Syracuse University School of Information Studies
 

Below are the 20 most recent journal entries recorded in Syracuse University School of Information Studies' LiveJournal:

Saturday, April 17th, 2010
1:41 pm
[monical_76]
Library of Congress, Google and Tweets

On April 14th the following was posted on the Twitter Blog http://blog.twitter.com/2010/04/tweet-preservation.html

Tweet Preservation


Wednesday, April 14, 2010

The Library of Congress is the oldest federal cultural institution in the United States and it is the largest library in the world. The Library's primary mission is research and it receives copies of every book, pamphlet, map, print, and piece of music registered in the United States. Recently, the Library of Congress signaled to us that the public tweets we have all been creating over the years are important and worthy of preservation.

Since Twitter began, billions of tweets have been created. Today, fifty-five million tweets a day are sent to Twitter and that number is climbing sharply. A tiny percentage of accounts are protected but most of these tweets are created with the intent that they will be publicly available. Over the years, tweets have become part of significant global events around the world—from historic elections to devastating disasters.

It is our pleasure to donate access to the entire archive of public Tweets to the Library of Congress for preservation and research. It's very exciting that tweets are becoming part of history. It should be noted that there are some specifics regarding this arrangement. Only after a six-month delay will the Tweets be used for internal library use, for non-commercial research, public display by the library itself, and preservation.

The open exchange of information can have a positive global impact. This is something we firmly believe and it has driven many of our decisions regarding openness. Today we are also excited to share the news that Google has created a wonderful new way to revisit tweets related to historic events. They call it Google Replay because it lets you relive a real time search from specific moments in time.

Google Replay currently only goes back a few months but eventually it will reach back to the very first Tweets ever created. Feel free to give Replay a try—if you want to understand the popular contemporaneous reaction to the retirement of Justice Stevens, the health care bill, or Justin Bieber's latest album, you can virtually time travel and replay the Tweets. The future seems bright for innovation on the Twitter platform and so it seems, does the past!
 
 
see also: http://www.wired.com/epicenter/2010/04/loc-google-twitter/  and http://www.loc.gov/today/pr/2010/10-081.html

QUESTIONS:
How do you think the Library of Congress might 'arrange' these archives?

Do you see a 'value' in this collection, especially in light of the 'non-commercial research' caveat?

Do you think that Tweeters will become more aware/careful of what they publicly tweet?

Is it 'healthy' to 'relive a moment in time'? Can this blur current reality?
 

Thursday, April 8th, 2010
12:07 am
[lcrowe55]
Murdoch Says Publishers Must Stand Up to Google, Bing
I found the following article, posted today, on Business Watch:

Rupert Murdoch, chairman and chief executive officer of News Corp., said newspaper publishers should prevent search engines like Google Inc. and Microsoft Corp.’s Bing from linking to full articles for free.

“It’s produced a river of gold, but those words are being taken mostly from the newspapers,” Murdoch, 79, said yesterday at a taping of “The Kalb Report” at the National Press Club in Washington. “I think they ought to stop it, that the newspapers ought to stand up and let them do their own reporting.”

Murdoch, who publishes the Wall Street Journal and Times of London, said news aggregators should be able to display only a headline, a couple of sentences and the option to subscribe to the publication. News Corp., based in New York, has started charging consumers for access to some of its newspapers’ Web sites, emulating the subscription model of the Wall Street Journal online.

To make their businesses work, publications, including New York Times Co.’s flagship newspaper, have to put up pay walls on the Internet to charge readers for access to content, Murdoch said.

“Most newspapers in this country are going to have to put a pay wall up,” he said. It’s unclear how high the wall needs to be, he said.

News Aggregators

Murdoch said the New York Times doesn’t seem to be able to make up its mind about how to implement its planned pay wall, which is supposed to start next year. He said it will likely face opposition from the newspaper’s columnists, who prefer the wider audience they get when their columns are free. Robert Christie, a spokesman for Times Co., declined to comment.

Murdoch said he has two computer screens in his office that display wsj.com and nytimes.com all day so he can compare the news on the sites.

Murdoch has been saying that online news aggregators, like Google, must pay to distribute articles from his company’s newspapers. News Corp. is considering blocking Google from displaying its articles and has talked to Bing about exclusively listing on that site, people familiar with the matter said in November.

Google News, Search

Publishers can charge for content while also making it discoverable through Google, the Mountain View, California-based company said today in an e-mailed statement. Google’s news and Web search functions send news organizations about 100,000 visits every minute, Google said.

“Publishers put their content on the Web because they want it to be found, so very few choose to exclude their material from Web search,” Google said in the statement. “Of the tens of thousands of news publishers who choose to make their articles part of Google News, over the lifetime of the service there have been only a few dozen that asked us not to include them.”

Google News gathers stories from the Web and displays their headlines, photos and the first few lines with links to the full articles on the original publishers’ Web sites. Publishers can allow Google’s search crawler to index content that is behind pay walls, letting users click through to a subscription article without cost.

Google said it respects publishers’ wishes and that there are easy ways to remove content if publishers don’t want it to appear in search results.

Publishers can limit to five a day the number of articles a reader can access for free through its search and news services, Google has said, as well as opt out completely.

New York Times Chairman Arthur Sulzberger Jr. said at a conference last month that he doesn’t have any antagonism toward Google, which sends a lot of traffic to the newspaper’s Web site, and thinks it’s important to work with the company.


Source: http://www.businessweek.com/news/2010-04-07/murdoch-says-publishers-must-stand-up-to-google-bing-update1-.html
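The five-free-articles-per-day arrangement mentioned in the article is essentially a metered counter kept per reader per day. As a rough illustration (the names and quota logic below are hypothetical, not any publisher's or Google's actual implementation), such a meter might look like:

```python
from collections import defaultdict
from datetime import date

class ArticleMeter:
    """Hypothetical metered paywall: each reader gets a daily quota of free articles."""
    def __init__(self, daily_limit=5):
        self.daily_limit = daily_limit
        self.counts = defaultdict(int)  # (reader_id, date) -> articles viewed

    def allow(self, reader_id, today=None):
        """Return True (and count the view) if the reader still has free articles today."""
        key = (reader_id, today or date.today())
        if self.counts[key] < self.daily_limit:
            self.counts[key] += 1
            return True
        return False  # quota exhausted: show the subscription prompt instead

meter = ArticleMeter(daily_limit=5)
views = [meter.allow("reader-1", date(2010, 4, 7)) for _ in range(6)]
# the first five requests are allowed; the sixth is blocked
```

A real service would also have to decide how to identify the "reader" (cookies, accounts, IP addresses), which is where most of the practical difficulty lies.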
___________________
Questions:

1.) What kind of implications would putting up pay walls on news articles have for college students? For academic librarians? For the public in general?

2.) Does Murdoch's argument against search engines linking to full-text articles for free from major news sources make sense? Should newspapers make their content available online to the public for free? Murdoch says that "most newspapers in this country are going to have to put a pay wall up"-- do you think that newspapers will be forced to charge users for access to their articles? What do you think of this subscription model?

3.) How will this affect the newspapers and reporters themselves? As this article states, most reporters like that their columns and articles can be freely accessed online because it gives them greater visibility. Do you think this will also affect how people search on engines like Google and Bing? Who will this affect negatively or positively (consider the public, the newspapers, the reporters, and the search engines)?
Saturday, March 27th, 2010
11:44 pm
[epenthetic]
Mobile search - Google, product search, and libraries
Google’s recent announcement of their update to product search had a mobile bent: product search results would now show whether the product was currently in stock at nearby bricks-and-mortar retailers. After reading this, my first thought was of Worldcat Mobile, a mobile phone interface for the Worldcat database that uses Google Maps to let users indicate their location and see local libraries. OCLC launched the service last year in partnership with Boopsie, a mobile search provider. Worldcat Mobile started as a six-month trial, but is now in an open beta.

Google's goal in integrating mobile search with the company's other search tools is clear: mobile search revenues are an excellent tool to recruit business partners, whether they are online or bricks-and-mortar merchants, mobile phone manufacturers, or wireless network providers. Google keeps any revenue it draws from mobile devices that come with Google applications, but shares ad revenue from the ads that are part of mobile search. Libraries' potential uses for mobile technology are not so clear; or, better said, they are full of undefined possibilities. This raises some questions:

1. What should libraries' role be in using mobile search? If an OCLC member library can draw patrons into the library from Worldcat Mobile, or use patrons' mobile devices as supplementary access points to the catalog (potentially freeing up space at overbooked computers), what are the next steps to take to further integrate a mobile library web interface?

2. In what ways could libraries begin using mobile resources to connect with other information resources in the community? Beyond the collection of books, DVDs, periodicals, microfilm, etc. a library is an asset to the community by acting as an information resource about the community. What sort of mobile-friendly tools could a community library offer to its members?

3. OCLC's partner Boopsie mentions integrating "catalog search, real-time integration, reading lists, your blogs and tweets." If we look at areas outside of the catalog, are there ways to make electronic resources available through mobile channels? Citation search, article databases, and other types of information resources with web interfaces could be powerful- if there's a way to use them from the small screen.
3:24 pm
[jrotoole1121]
Updates to Bing

On March 25th, at the Search Engine Strategies show in New York, representatives from Microsoft’s search engine Bing shared some changes currently being made to Bing in order to help people search more effectively. More specifically, these changes are geared towards helping people make better decisions. The changes primarily have to do with Bing’s Quick Tabs, which help users to refine their search queries, real-time search (in addition to Bing’s Twitter site), and Bing Maps.

One of the changes to Bing Maps includes a map application from Foursquare, a location-based networking service, which would allow people to zoom in on certain locations and read what local residents are saying about where to go and what to do. Yusuf Mehdi, senior vice president of the Online Audience Business for Bing, “said users will be eventually able to search Bing Maps to see if there is a line at the local Starbucks, or whether a bus is coming on time” (Boulton, 2010, para. 13). The idea is to make the maps as interactive and as close to real-time as possible.

Bing has been trying to end Google’s monopoly over general web search tools, and has been steadily gaining ground. These new updates, along with the specific mission of helping people make better decisions, may allow Bing to more successfully compete.

Questions:

1. Do you think Bing’s goal of helping people to make better decisions makes searching on Bing significantly different from how Google’s search works?

2. Do you think that Bing is real competition for Google, considering these new changes and its partnerships with other web services such as Twitter and Foursquare?

3. What are the benefits and/or problems you see with future Bing Maps features?

References:

Boulton, C. (2010, March 25). Microsoft exec shows off Bing on Windows phone 7 series, Foursquare. Message posted to http://www.eweek.com/c/a/Search-Engines/Microsoft-Exec-Shows-Off-Bing-on-Windows-Phone-7-Series-Foursquare-581160/

Schwartz, T. (2010, March 25). New stuff coming from Bing this spring. Message posted to http://www.bing.com/community/blogs/search/archive/2010/03/24/new-stuff-coming-from-bing-this-spring.aspx


Sunday, March 14th, 2010
3:02 pm
[jascot04]
Finding Statistics on Google
In 2009, Google released a public data search feature that allows users to find useful statistics for research. Recently Google added information from the World Bank and is continuing to recruit the efforts of other public and non-profit agencies so more people can access the information worldwide.

Google has done research on the type of statistics people are specifically looking for in order to include them in their Public Data Explorer. Some of the statistics that people are looking for include school comparisons, unemployment, salaries, exchange rates, crime statistics, and many, many more. Google will use this list in order to prioritize which statistics absolutely have to be included.


Google also plans on releasing Google Public Data Explorer in Labs, which will serve as a visualization tool for statistics. This service will include not only charts and graphs but animated charts that allow users to track changes over time as new information comes in. Charts created from these statistics will be able to be embedded in blogs, websites, etc.

Schwarzler, J. (2010, March 8). Statistics for a changing world: Google Public Data Explorer in Labs. Google Public Policy Blog. Retrieved from http://googlepublicpolicy.blogspot.com/2010/03/statistics-for-changing-world-google.html

Questions:
1. How could this service from Google be useful?
2. What could this possibly mean for commercial databases that offer statistical information?
3. Would you be more prone to utilizing this service instead of using a commercial database?
Wednesday, March 10th, 2010
6:30 pm
[corcoranlms]
Google Mobile Offers Search Suggestions Based on Location

Has Google revolutionized the way searches are conducted on cell phones? Google points out that "typing a query into the search box on a phone can often be slow and difficult." To combat this, Google Mobile now has search suggestions based on the user's query and geographical location. It can be from the user's current or last location. For example, if I were in NYC and were looking for museums in the area, once I started typing "muse," search suggestions would be Museum of Natural History, Museum of Modern Art, Metropolitan Museum of Art and so forth. It would leave out "muse" results in other cities, and with one tap users would be able to go directly to results.
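The behavior described above amounts to autocomplete filtered by location: candidate suggestions are matched against the typed prefix, restricted to the user's area, and ranked. Here is a toy sketch of that idea; the data and popularity scores are invented for illustration, and Google's actual system is of course far more sophisticated:

```python
# Hypothetical suggestion index: (name, city, popularity score)
SUGGESTIONS = [
    ("Museum of Natural History", "New York", 95),
    ("Metropolitan Museum of Art", "New York", 92),
    ("Museum of Modern Art", "New York", 90),
    ("Museum of Science and Technology", "Syracuse", 60),
]

def suggest(prefix, user_city, limit=3):
    """Return suggestions whose name contains a word starting with the typed
    prefix, restricted to the user's city and ordered by popularity."""
    prefix = prefix.lower()
    matches = [
        (name, pop) for name, city, pop in SUGGESTIONS
        if city == user_city
        and any(word.startswith(prefix) for word in name.lower().split())
    ]
    matches.sort(key=lambda m: m[1], reverse=True)
    return [name for name, _ in matches[:limit]]

syracuse = suggest("muse", "Syracuse")   # only the Syracuse museum
new_york = suggest("muse", "New York")   # the NYC museums, most popular first
```

The interesting engineering questions are hidden in the ranking step: a real system blends proximity, query popularity, and personal history rather than a single score.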

I tested this feature using Google Mobile on my Droid under the location of Syracuse, NY. I typed in "muse" and the Museum of Science and Technology (MOST) in downtown Syracuse appeared as the first entry. I also tried "cine" (eventually cinemas) and again it yielded local cinemas. A visual example from Google's blog of how search suggestions appear on Google Mobile is linked in the references below.


 

                                        
  
The location feature can be activated or deactivated by going to "Settings" on the Google Mobile homepage. Currently, location search suggestions on Google Mobile are only available for the iPhone and Android, but in the future may be available on all smartphones.

 

Questions

1. What privacy issues might arise by enabling the location feature in Google Mobile?

2. Have you or would you utilize Google Mobile's location search suggestion feature on your mobile device? Why or why not?

3. What problems or inaccuracies might occur when using this feature?

4. Any other opinions or comments?

 


References:

Google. (2010, January 14). Optimized search suggestions using your location. Google Mobile Blog. Retrieved from http://googlemobile.blogspot.com/2010/01/optimized-search-suggestions-using-your.html

Sen, A. (2010, January 16). Google delivers location based search suggestions to mobile search. eBrandz. Retrieved from http://news.ebrandz.com/google/2010/3084-google-delivers-location-based-search-suggestions-to-mobile-search.html




-K. Regan
Thursday, March 4th, 2010
8:20 pm
[jocelynjo]
Semantic Search is coming....

Various developing semantic search tools are striving to change our relationship with search.  These new search engines attempt to make sense of our search results based on the search terms themselves.  Erik Schonfeld at TechCrunch wrote in January (Schonfeld, 2010) about the competition to develop the first user-friendly semantic search engine. In addition, Nova Spivack, founder of the community site Twine.com, has released early demos of a semantic search tool called T2: http://www.novaspivack.com/uncategorized/twine-t2-latest-demo-screenshots-internal-beta.  Spivack’s screenshots show how the limiters can change from those appropriate to cooking to those appropriate to entertainment, just one possible impact of semantic search.

One example available now on Bing is the recipe search.  Go to Bing.com and search on a food type; try chicken recipes or macaroni recipes, like the examples in the articles.  Click on Recipes in the left sidebar under Results, and you are taken to a different set of results and limiters specific to cooking.  Content is provided by many partners, including Epicurious.com, Delish.com (an MSN company), MyRecipes, and FatSecret.com. Google is experimenting with a tool called Google Squared, but results from this work in progress are erratic.  The results from a search on pasta are very different from the results for a search on macaroni. Search on Shaun White and the results are more interesting.  Other experiments are ongoing at Hakia.com, Yebol.com, and Sensebot.com, to name a few.

Results from these new search tools depend on the addition of meaning to the search terms through various techniques.  In the Spivack screenshots, one can see how metadata is manually added to a web address, thereby creating an index that is further related to an overall ontology.  Other beta semantic search sites also add meaning through metadata, but the way the metadata is generated varies among the sites.  There are many more search tools (Pandia, 2009) experimenting with semantics in many forms, changing the way results are discovered, ranked for relevance, and displayed.  Are we ready?
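The core idea, attaching metadata from an ontology to indexed pages so that a query like "chicken recipes" can be scoped to a domain and offered domain-specific limiters, can be sketched in a few lines. Everything below (URLs, tags, field names) is invented for illustration and is not how any of the named sites actually store their data:

```python
# Each indexed page carries an ontology domain and facet metadata in
# addition to its ordinary index terms.
INDEX = [
    {"url": "example.com/roast-chicken",
     "terms": {"chicken", "recipes"},
     "domain": "cooking",
     "facets": {"cuisine": "American", "time": "90 min"}},
    {"url": "example.com/rubber-chicken-gag",
     "terms": {"chicken", "comedy"},
     "domain": "entertainment",
     "facets": {"genre": "comedy"}},
]

def semantic_search(query_terms, domain=None):
    """Match on terms, then optionally restrict results to one ontology domain."""
    hits = [doc for doc in INDEX if set(query_terms) & doc["terms"]]
    if domain is not None:
        hits = [doc for doc in hits if doc["domain"] == domain]
    return hits

# Applying the "cooking" limiter makes the rubber-chicken page drop out.
cooking_hits = semantic_search({"chicken", "recipes"}, domain="cooking")
```

The facet metadata is what drives the cooking-specific limiters seen in the Bing recipe search: once results are scoped to a domain, the facets of the surviving documents become the sidebar filters.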

Questions

  • What will be the impacts of this type of searching, when the search tool knows that “chicken” and “recipes” together means that articles about rubber chickens are not needed?
  • How will these tools change the way we search?
  • Who gets to write the metadata, and does it matter?
  • Are we so accustomed now to Google-style results that we won’t be ready for a new breed of search tools?

References 

Google (2010).  Google squared. Retrieved from http://www.google.com/squared 

Pandia. (2009, February 16).  Top 5 semantic search engines.  Pandia Search Engine News. Retrieved from http://www.pandia.com/sew/1262-top-5-semantic-search-engines.html 

Schonfeld, E. (2010, January 22).  Bing, Google, and the enigmatic T2:  the race for a complete semantic search engine. [blog post]. TechCrunch. Retrieved from http://techcrunch.com/2010/01/22/t2-bing-google-radar-semantic-search-race/ 

Spivack, N. (2009, December 21).  Twine “T2” – Latest Demo Screenshots (Internal Alpha). [blog post]  Nova Spivack: Minding the Planet.  Retrieved from http://www.novaspivack.com/uncategorized/twine-t2-latest-demo-screenshots-internal-beta

9:16 pm
[dbfollett]
Google Real-Time Search
 

A recent article on Wired extols how Google’s algorithm rules the web even as it is considered a continuous work in progress.

There will be some 550 or so improvements to the fabled Google algorithm just this year. Each change will be decided upon at a gathering that includes engineers, product managers, and executives.

Following are the seven major milestone changes to the Google algorithm.  The last (my focus here) was put into place in December 2009.

1.  New algorithm [August 2001] The search algorithm is completely revamped to incorporate additional ranking criteria more easily.

2.  Local connectivity analysis [February 2003] Google’s first patent is granted for this feature, which gives more weight to links from authoritative sites.

3.  Fritz [Summer 2003] This initiative allows Google to update its index constantly, instead of in big batches.

4.  Personalized results [June 2005] Users can choose to let Google mine their own search behavior to provide individualized results.

5.  Bigdaddy [December 2005] Engine update allows for more-comprehensive Web crawling.

6.  Universal search [May 2007] Building on Image Search, Google News, and Book Search, the new Universal Search allows users to get links to any medium on the same results page.

7.  Real-Time Search [December 2009] Displays results from Twitter and blogs as they are published.

After reading today on the local newspaper site Syracuse.com about a shooting in the city’s Valley section, I was curious to see how real-time Google’s Real-Time Search might be.  My simple test was the shooting victim’s name and age.

My test on Google at 12:58 PM found the appropriate item that had been posted at 12:28 PM and I was fairly impressed.

 

syracuse.com Mobile

http://blog.syracuse.com/mobilenews/index.html

[Posted by John Mariani / The Post-Standard March 04, 2010, 12:28 PM]


Another interesting facet is watching the development and marketing of a business (or possible scam) over the course of a few weeks with this Real-Time Search algorithm in place. When we first heard about Data Network Affiliates (DNA) and CEO Dean Blechman through videos on YouTube, DNA also showed up in the advertisement section of Google. If you do a search today at Google on “Dean Blechman scam”, you will find results for a couple of blogs that are said to report on MLM scams. The current buzz is that Dean Blechman is no longer the CEO of DNA, though the company still exists as a viable business if you search on “Data Network Affiliates”. A real-time observation at this point is that the URLs change frequently for advertisements and supporters/downlines for DNA.


1. Would you consider Real-Time Search a milestone?

 

2. Do you see any negative impacts from this real-time search ability?

-------------
Levy, S. (2010, March). Exclusive: How Google’s algorithm rules the web. Retrieved March 4, 2010, from http://www.wired.com/magazine/2010/02/ff_google_algorithm/all/1

Sunday, February 28th, 2010
12:07 am
[coliveri]
Google Personal Web Search
In December of 2009, Google announced that it was personalizing everyone's web search. Google tracks what you click on in search results and uses that to learn your search behavior. The tracking history is kept for six months, and you can flush it whenever you want. Tracking your behavior supposedly nets you more personalized search results, increasing the rank of hits from sites that you tend to visit more often. As this is only one "signal" in Google's ranking technology, your activity should not overwhelm the search algorithm and leave you with a homogenized view of the web.
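Conceptually, this kind of personalization is a small re-ranking boost applied on top of the ordinary relevance score. The sketch below is only a guess at the general shape of such a signal; the weights and names are invented, and Google's real system is far more complex:

```python
from collections import Counter

class PersonalizedRanker:
    """Toy click-based personalization: the base relevance score is nudged
    upward for sites the user has clicked before. Weights are illustrative."""
    def __init__(self, boost=0.1):
        self.clicks = Counter()   # site -> number of past clicks
        self.boost = boost

    def record_click(self, site):
        self.clicks[site] += 1

    def rerank(self, results):
        """results: list of (site, base_score) pairs. The personalization
        boost is one signal among many, small relative to the base score."""
        return sorted(results,
                      key=lambda r: r[1] + self.boost * self.clicks[r[0]],
                      reverse=True)

ranker = PersonalizedRanker()
for _ in range(5):
    ranker.record_click("siteB.com")
results = [("siteA.com", 1.0), ("siteB.com", 0.8)]
# siteB's click history (0.8 + 5 * 0.1 = 1.3) lifts it above siteA
top = ranker.rerank(results)[0][0]
```

Because the boost is a small additive term, a page with a much higher base score still wins, which is how personalization can avoid leaving you with a homogenized view of the web.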


Questions:

1) Can this technology be used effectively in library OPACs and databases? Does it add any value?

2) This technology also works when not logged in. Should there be concerns about the implications to patron privacy and can these be addressed without rendering the technology useless?

Monday, February 22nd, 2010
3:49 pm
[smbadman_84]
Professional Affiliations help with Career and Personal Growth


The iSchool gave students who were enrolled in IST 511 a paid membership in the American Library Association. This was very important to me in transitioning to the library profession.  Because of my affiliation with the medical field, I also joined the Medical Library Association last fall. These memberships come with publications that cover current events and opportunities and show librarians what is happening with our field at the national level. There are articles about practice and how to improve job functions such as searching.

One organization I would like to belong to is the American Society of Health Informatics Managers (ASHIM). According to a story I recently read on the ASHIM website (http://www.healthcare-informatics.com/ME2/dirmod.asp?sid=&nm=&type=news&mod=News&mid=9A02E3B96F2A415ABC72CB5F516B4C10&tier=3&nid=D23F1CA0FE014F188E05C10ED0BDD78D), healthcare IT jobs are going to be needed in greater numbers in the future. The story discussed Health Information Technology jobs that are on the rise, and the linked survey showed that between 50,000 and 200,000 positions will be needed in the next few years. This information technology work links directly to librarianship, and the field is open to librarians filling these vital roles. In this current economic climate, it is refreshing to see a field where growth is expected and librarians/information specialists can fill this niche. I feel being a part of ASHIM will help me to be competitive in this growing field and help direct me towards the right professional path.


In May 2010, I will graduate from Syracuse University’s prestigious iSchool with a Master of Library and Information Science. I will then belong to another profession that is both exciting and interactive and will require continuing education. It is important to ask myself at this time, "How does a librarian add meaning to their career?"  Librarians can be good stewards to their peers in many ways. They can participate in discussions with other information professionals about changes in finding information and how best to deal with those changes. Mentoring is a great way to nurture others in our field and encourage learning.  It will be a relief to graduate with a base knowledge of library science, but I really look forward to being part of a larger group of information leaders and growing as the profession grows.

As a current IST 637 student, where we are focused on advanced searching skills, it has become evident that unless there is constant evaluation of web tools and information retrieval systems then a librarian can quickly become dated. I think belonging to professional organizations is the key to staying current. Joining listservs, attending conferences and reading current publications will help librarians to stay current in the field. Graduation is a great time to celebrate but also a vital time to connect to the greater librarian community.

American Society of Health Informatics Managers HIT Jobs Survey. Retrieved February 21, 2010, from http://ashim.org/wp-content/uploads/2010/02/HIT-Jobs-Survey.pdf.

 

Questions

  • What groups or affiliations do others in IST 637 currently belong to?
  • What associations do others in the class see themselves joining in the future?
  • How do you feel this will help with your career or the library field?

Friday, February 19th, 2010
9:14 am
[rburdick]

This story comes from New York Birders, the newsletter of the New York State Ornithological Association, Inc. (NYSOA).

 

About one year ago the group launched a program to create an online archive of The Kingbird, the organization’s quarterly journal dating back to 1950. NYSOA went with Integrated Filing Systems of Fairport, NY to perform the scanning and optical character recognition work.  The president of the NYSOA designed and implemented the search-engine-based software solution and web application.  Over a period of six months, files were uploaded to the group’s server and integrated into the online searchable database.  This searchable database contains about 8 million words on nearly 16,000 pages comprising more than 5,200 files. Additionally, there is an online library of 229 full issues and 4 ten-year indices available for download or online browsing. The public archive and library will be updated at least once a year to ensure it always includes all but the most recent 8-12 issues of The Kingbird.
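At its core, a keyword-searchable archive like this rests on an inverted index: a map from each word to the documents (here, OCR'd pages) that contain it. A minimal sketch of the idea, with invented document names:

```python
from collections import defaultdict

def build_index(documents):
    """Build a minimal inverted index: word -> set of document ids.
    A real archive applies this to millions of OCR'd words, plus
    normalization (stemming, punctuation stripping) omitted here."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, *keywords):
    """Return ids of documents containing every keyword (AND semantics)."""
    sets = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

docs = {
    "kingbird-1950-1": "eastern kingbird nesting records",
    "kingbird-1962-3": "winter finch irruption notes",
}
index = build_index(docs)
hits = search(index, "kingbird", "nesting")
```

The OCR step matters as much as the index: search quality in a scanned archive is bounded by how accurately the page images were converted to text.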

 

Pooth, C. (2010, January). Now available online: 57 years of The Kingbird in a keyword searchable archive. New York Birders, 39(1), 1-3.

 

Questions---

1. How would you find out about this database’s existence?  After all, it is accessible to the public, but if I hadn’t told you about it, you probably would never have heard of this organization. There must be other groups out there almost as invisible.

2. Imagine it is your job to create a new database such as this one: what fields do you include? Do you use free-text searching or a controlled vocabulary?  What limiters would you consider incorporating into the tool?

3. Just for fun: did you ever consider creating databases as a possible career?

 

Thursday, February 18th, 2010
8:21 pm
[apbangs]
Back to the Future: IIPC Captures Yesterday

As you consider the future of Web search tools, how might they discover and record the past? Imagine a robot time-traveling to Web pages of long ago. When you hit a dead link, where did it go?

The International Internet Preservation Consortium (IIPC) addresses these questions and more. Chartered in 2003 by 12 institutions, IIPC now numbers 39 member organizations (mostly libraries) worldwide. It serves to universally archive the Web, as stated in its mission "to acquire, preserve and make accessible knowledge and information from the Internet for future generations everywhere, promoting global exchange and international relations" (IIPC, 2010, Mission section, para. 1). Last week Resource Shelf introduced me to IIPC with this announcement about a new resource.

That is, the IIPC Access Working Group's Web Archive registry, which alphabetically lists 23 archives by name, member institution (and its URL), date the archive began, language(s), searching options, collecting (harvesting) methods, and a brief description. The Registry has its peculiarities: it mixes foreign languages, and not all of its links work. Explore the member sites. Search Pandora (Australia's Web Archive) by URL, keyword, or full text, and browse alphabetically or by subject. Try any United States archive: Web Archiving Collection Service or WAX (Harvard University), Internet Archive (a non-profit based in San Francisco), Web Archive (Library of Congress), and Web Archiving Service or WAS (California Digital Library). Right-hand images connect to the corresponding archives. Peek into California's WAS site and watch two instructional videos, Create and Capture Sites and Display and Analyze Results, to learn more.

The International Internet Preservation Consortium is a collaborative cache on a giant scale. From my IIPC investigations, I've assembled more acronyms and picked up a little Web-archive lingo. Capture is a significant term, either a verb or a noun: you capture rather than retrieve results, and a result is a capture. IIPC's super crawler captures born-digital Web sites and Web pages, which NutchWAX indexes. What's more, you search in the Wayback Machine. Most important, you're able to search the Internet's bygone Web pages evermore.
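A Wayback-style archive can be pictured as a store of timestamped captures per URL, where replaying a page means finding the capture closest to (at or before) the requested date. The sketch below is a toy model of that lookup, not the Internet Archive's actual implementation:

```python
import bisect

class CaptureStore:
    """Toy model of a Web archive: each crawl of a URL is stored as a
    timestamped capture, and replay returns the newest capture at or
    before the requested date."""
    def __init__(self):
        self.captures = {}  # url -> sorted list of (timestamp, content)

    def capture(self, url, timestamp, content):
        snaps = self.captures.setdefault(url, [])
        bisect.insort(snaps, (timestamp, content))

    def replay(self, url, timestamp):
        snaps = self.captures.get(url, [])
        # find the last capture whose timestamp is <= the requested one
        i = bisect.bisect_right(snaps, (timestamp, chr(0x10FFFF))) - 1
        return snaps[i][1] if i >= 0 else None

store = CaptureStore()
store.capture("http://example.org", "20010310", "<html>old</html>")
store.capture("http://example.org", "20100214", "<html>new</html>")
page = store.replay("http://example.org", "20050601")  # the 2001 capture
```

Requesting a date before the first capture returns nothing, which is the toy equivalent of the dead links the post opens with: a page nobody captured is simply gone.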

Activity:

Try the Wayback Machine.

Capture an older version of your favorite Web site by typing its URL into the basic search box. Browse around the Web page. Skip down for instructions about how to put a Wayback Machine link into your browser. When you’re on future sites, click it to see past ones (Internet Archive, 2001, Wayback machine “Web” Web page). Comments?

Questions:

  • Is the International Internet Preservation Consortium really necessary? 
  • What if you searched an IIPC member’s Web archive and captured your creative content from an old Web site? 
  • What potential legal ramifications ensue by archiving Web sites? 
  • What other problems or issues, technical and otherwise, emerge? 
  • Any predictions about IIPC’s future? 
  • What pivotal period or event is worth preserving? 
  • Along the same lines, “If you were a K12 student which websites would you want to save for future generations? What would you want people to look at 50 or even 500 years from now?” (Internet Archive, 2001, K-12 Web archiving program section, para. 1). 
  • Traveling closer to home, what will soon happen to http://libwww.syr.edu?


References

California Digital Library. (2009). Web archiving service [Two 10-minute videos]. Retrieved February 15, 2010, from http://webarchives.cdlib.org/p/projects/

IIPC. (2010). Member archives. Retrieved February 12, 2010, from http://www.netpreserve.org/about/archiveList.php 

IIPC. (2010). Mission. Retrieved February 14, 2010, from http://www.netpreserve.org/about/mission.php

Internet Archive. (2001, March 10). Wayback Machine [Terms of use]. Retrieved February 14, 2010, from http://www.archive.org/web/web.php

Price, G. & Kennedy, S. (Editors). (2010, February 10). Re: Internet archive registry--Organizations archiving the web. ResourceShelf. Message posted to http://www.resourceshelf.com/page/9/



Wednesday, February 10th, 2010
3:38 pm
[agboswor]
Google Social Search

Last year Google worked on an experiment called Social Search.  It was widely tested, and in October 2009 they announced that it would be available to all users.  The way it works is that in addition to searching the web for your query, it also searches your social network.  Google defines your social network as your Google contacts, friends of your contacts, and any contacts listed in your Google profile.  One downside is that you have to have a Google profile, and you have to have friends!  Social Search will show results from your network’s blogs, tweets, and Facebook status updates.  Although Google had already started indexing some of these, they rarely popped up on the first page, and it was impossible to limit results to your network unless you included the names of specific people in your query.  The new Google interface will search the web and, at the bottom of the first page, display results from your network.

The article I read mentioned that this would be a great feature if there were a way to tag people for certain topics.  Say you had a friend who was a professor of mathematics.  If you could tag that person as an expert in equations, calculus, algebra, etc., it would be much more useful: the experts in a field would rise to the top of your list so that you wouldn’t have to sift through “garbage”.  Another cool application of this new feature would be creating an inner circle of experts on a certain topic.  For example, Google could create a panel of the leading researchers in marine biology, and you could add that group as a contact.  This would be a powerful way to get specialized and reputable sources.

Overall this service will help index and search information found in blogs, tweets, and other sources that are rarely indexed now, as long as you have a Google account.  One concern some people had with this service was that information from their contacts only came up with a very broad search.  Most librarians and researchers are fairly specific about what they need and translate that into their searches, so this service may be useless for them.

Questions:

1. Can you think of any more concerns there may be with this new service?

2. Do you think this counters Google’s mantra of being a simple, clutter-free search engine?

3. When would this service be most applicable, and when would it be least applicable?

4. Do any of you use Google contacts or have a Google profile?  Would any of these contacts have information you would want to come up in a search?

5. Any other opinions on this service?

Sources:

Official Google Blog.  “Introducing Google Social Search:  I Finally Found My Friend’s New York Blog!”  10/26/09, Retrieved from http://googleblog.blogspot.com/2009/10/introducing-google-social-search-i.html

Research Buzz Blog.  “Google Opens Up Its Social Search”.  2/1/10, Retrieved from http://www.researchbuzz.org/wp/page/4/  


Tuesday, February 9th, 2010
4:43 pm
[nwroth]
Really Smart Mobile Search
SearchEngineLand reports that the new Siri app for the iPhone (other mobile platforms coming soon) is technically not a search engine, but it acts like one, and more.  A Siri user simply speaks a question in natural, conversational language, and Siri answers it by tapping into several different application programming interfaces, such as Yelp, CitiSearch, Yahoo, Google Maps, TaxiMagic, and others.

In response to a request like "I'd like a reservation at X restaurant at 7:00", Siri opens a page where you can make that reservation in one click.  Siri acts less like a typical search tool and more like a personal assistant in this regard.  According to the Siri website, Siri can now help you find and plan things to do (the query 'find a romantic place for dinner' makes Siri look for reviews of restaurants in your area that use the term romantic), but it will soon be able to "handle reminders, flight stats, and reference questions".  Siri claims that its tool "adapts to user preference" and will eventually be able to "manage your to-do list".
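The dispatching behavior described above, where a spoken request is routed to whichever partner API can handle it, can be modeled in miniature. This toy router is purely illustrative: the keyword lists and service names are invented, and real systems like Siri use far more sophisticated natural-language parsing than substring matching:

```python
# Toy intent router: maps request keywords to the service that would
# handle them, loosely like an assistant dispatching to partner APIs.
ROUTES = [
    (("reservation", "dinner", "restaurant"), "restaurant-booking API"),
    (("taxi", "ride"), "taxi-dispatch API"),
    (("movie", "showtimes"), "event-listings API"),
]

def route(utterance):
    """Pick the first service whose keywords appear in the utterance;
    fall back to a plain web search when nothing matches."""
    words = utterance.lower()
    for keywords, service in ROUTES:
        if any(keyword in words for keyword in keywords):
            return service
    return "web search"

print(route("I'd like a reservation at 7:00"))   # restaurant-booking API
print(route("what's the capital of France"))     # web search
```

The fallback case is the interesting design point: anything the assistant cannot map to a structured action degrades gracefully into an ordinary search.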

Questions:

1. How might Siri change the expectations of users of traditional search tools?  How might library patrons expect research queries to be conducted?

2.  What concerns do you have about a search tool that 'knows' you as comprehensively as Siri?

3.  Check out the full report at SearchEngineLand.  Do you think the '10 blue links' approach to search results is dead?







Thursday, April 9th, 2009
11:12 am
[jared_slear]
FTC plans regulations for online marketing

Word-of-mouth marketing is not exempt from the laws of truthful advertising

By Iain Thompson

The Federal Trade Commission (FTC) is planning to regulate online viral marketing that uses blogs and social networking sites.

Marketers are spending billions worldwide to get the endorsements of key bloggers and groups on social networking sites. One tactic, used by Microsoft and others, is to send products to bloggers on 'long-term loans' on the understanding that they will comment about them on their sites.

Under the new regulations being proposed, such bloggers would be legally liable if they make untrue statements about the products or services. The companies too would face sanctions.

"This impacts every industry and almost every single brand in our economy, and that trickles down into social media," Anthony DiResta, an attorney representing several advertising associations, told The Financial Times.

This is the first revision of the rules on this kind of advertising by the FTC since 1980 and is needed, according to the organisation, because new forms of communication have opened up new fields to marketing.

"The guides needed to be updated to address not only the changes in technology, but the consequences of new marketing practices," said Richard Cleland, assistant director for the FTC's division of advertising practices. " Word-of-mouth marketing is not exempt from the laws of truthful advertising."

Advertisers are resisting the changes, however, which threaten a highly effective form of marketing new products and services.

"Regulating these developing media too soon may have a chilling effect on blogs and other forms of viral marketing, as bloggers and other viral marketers will be discouraged from publishing content for fear of being held liable for any potentially misleading claim," Richard O'Brien, vice president of the American Association of Advertising Agencies, said in an advisory to the FTC (Thompson, 2009).


Questions:

1.  Viral marketing refers to "...marketing techniques that use pre-existing social networks to produce increases in brand awareness or to achieve other marketing objectives (such as product sales)" (Wikipedia, 2009).  Essentially, these marketing techniques rely upon users passing along some "slogan" or "message" to all their friends.  An example is the text at the bottom of every email sent via Yahoo or MSN, which invites the recipient to sign up for said email service.

Having said all that, what is your impression of the FTC trying to regulate this type of advertising?  Is such regulation necessary; especially in a subjective environment like the "blogosphere" where users may simply be expressing opinions of various products or services anyway?


2.  The article mentions that companies will sometimes send products to blog owners on "long-term loans" in hopes that those individuals will mention the product in future posts.  Is this an ethical way to gain product exposure?  Or does this amount, essentially, to just paying bloggers to say nice things about a particular product?

3.  An increasing number of search results obtained via popular search engines link users to popular blogs, especially if the information is unavailable elsewhere on the web.  Since, again, many of these blogs are simply places for individuals to express their opinions on one subject or another (and it seems that companies can use marketing techniques to influence, or even exploit, bloggers' opinions of these subjects), do you think that blogs should even be included in search results?  Are there special circumstances where blogs should be displayed as results, while in all other circumstances they should not?


Sources:

Thompson, I.  (2009).  FTC plans regulations for online marketing.  Retrieved April 9, 2009, from http://www.vnunet.com/vnunet/news/2239850/ftc-plans-regulations-online

Wikipedia.  (2009).  Viral Marketing.  Retrieved April 9, 2009, from http://en.wikipedia.org/wiki/Viral_marketing

Wednesday, April 1st, 2009
4:57 pm
[kmroddy]
Specialist search tool facing legal pressure


The search tool SeeqPod is primarily used to find music on the Web. Its search algorithm finds mp3s and other audio files hosted on both personal and commercial Web sites, and it does not differentiate between files that may be in violation of copyright and those that are permitted by copyright holders. Since SeeqPod started in 2006 it has received attention from the recording industry, and as this article indicates, the mounting legal pressure has prompted the owners of SeeqPod's advanced search algorithm to consider licensing it to third parties. This would have the effect of making it even harder for the recording companies to control these kinds of searches. SeeqPod considers what its search tool provides as in no way different from what Google can do (it is relatively easy to create a search in Google that limits results to mp3s or other types of audio files), and therefore protected under the "safe harbor" provisions of the Digital Millennium Copyright Act. This case, along with similar cases involving other specialist search tools, raises many interesting questions and concerns. Among those are the following:

Is SeeqPod really doing anything different from what Google is doing (providing search results)?

Should search tools be required to filter results to avoid violations of copyright?

Do information professionals, such as reference librarians, have a responsibility not to make searchers aware of controversial search tools such as SeeqPod?
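The parenthetical point about limiting Google results to audio files can be made concrete. The `filetype:` and `OR` operators are standard Google search syntax; the helper below merely composes such a query as an illustration and is not anything SeeqPod itself uses:

```python
def audio_filetype_query(terms, extensions=("mp3", "ogg", "wav")):
    """Compose a Google query string that restricts results to the
    given audio file extensions, the kind of search the post says
    anyone can build by hand."""
    filetype_clause = " OR ".join(f"filetype:{ext}" for ext in extensions)
    return f"{terms} ({filetype_clause})"

print(audio_filetype_query("miles davis"))
# miles davis (filetype:mp3 OR filetype:ogg OR filetype:wav)
```

Pasting the resulting string into Google's search box applies the same filtering that specialist tools automate, which is exactly the equivalence SeeqPod's "safe harbor" argument rests on.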

Sunday, March 29th, 2009
5:58 pm
[sgepage]
Changes coming to Google?


Attention Google lovers! Changes may be coming to Google!

 

Google Blogoscoped reports on experimental options showing up in some Google users’ search results pages. These options include limiters, sorting options, display options, and other options, as listed below:

 

  • Limiters:  recent results only, videos only, forum entries only, reviews only, specific time ranges only
  • Sorting options:  relevance, date
  • Display options: standard results, images from the page, more text
  • Other options:  timeline, search suggestions, wonder wheel 

The “wonder wheel” is an interactive keyword display map of search results. The keyword search query appears at the center of the wheel, and terms related to the search query appear as “spokes” of the wheel. When a user clicks on one of the spokes, the wheel regenerates with the spoke now at the center of the wheel. At the same time, the listing of search results changes to reflect the updated search query.
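The recentering behavior is easy to model: treat related terms as a graph and rebuild the wheel around whichever node is clicked. A toy sketch, where the term graph is invented for illustration and real spokes would come from Google's own related-search data:

```python
# Toy related-terms graph; a real wonder wheel would pull these
# relationships from the search engine itself.
RELATED = {
    "jazz": ["bebop", "swing", "miles davis"],
    "bebop": ["jazz", "charlie parker", "dizzy gillespie"],
}

def wheel(center):
    """Return the wonder-wheel view for a term: the hub plus its spokes."""
    return {"hub": center, "spokes": RELATED.get(center, [])}

def click_spoke(current, spoke):
    """Regenerate the wheel with the clicked spoke as the new hub."""
    if spoke not in wheel(current)["spokes"]:
        raise ValueError(f"{spoke!r} is not a spoke of {current!r}")
    return wheel(spoke)

w = wheel("jazz")          # hub: jazz
w = click_spoke("jazz", "bebop")  # hub: bebop, wheel regenerated
print(w["hub"], w["spokes"])
```

The same click would also rerun the search with the new hub as the query, which is the second half of the behavior described above.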

 

According to Google Blogoscoped, anyone can “sign up” for the experiment by going to Google, copying and pasting the following text into the address bar, and hitting ENTER:

 

javascript:void(document.cookie="PREF=ID=4a609673baf685b5:TB=2:LD=en:CR=2:TM=1227543998:LM=1233568652:DV=AA:GM=1:IG=3:S=yFGqYec2D7L0wgxW;path=/; domain=.google.com");

 

Doing this “will set a cookie telling Google you’re taking part in the prototype” (Ruscoe & Lenssen, 2009).

 

Questions for discussion:

 

1. In the past few years, many commercial search databases have revamped their interfaces to include a single Google-esque search box. Google’s current experiment seems to be an example of things moving in the opposite direction. Do you think Google’s new display options are intended to mimic the limiters and sorting options already available in commercial search database interfaces? Will this move increase the legitimacy of Google among librarians and educators?

 

2. Google’s “wonder wheel”  is strikingly similar to the interactive maps touted by visual search engines such as Quintura and Kartoo. If the “wonder wheel” becomes a permanent fixture of Google, does it spell the end of other visual search engines?

Sunday, March 22nd, 2009
4:46 pm
[kmlulofs]
ReadWriteWeb
ReadWriteWeb is an interesting tech website I heard about a couple of weeks ago. After checking the site every so often, I've noticed they change their topic of conversation fairly regularly.  This week some of the topics are about computer coding, yet the posts are not hard to understand. As a person who doesn't understand computer coding all that well, it is nice to see a site that talks about it without making the user feel completely stupid.
While the site is not for everyone, it is a nice place to go to learn what's new with computers and technology.
In case the link doesn't work, it's http://www.readwriteweb.com/
Thursday, March 5th, 2009
11:55 am
[rmlohner]
Google Search's "Vince" Change
http://searchengineland.com/google-searchs-vince-change-google-says-not-brand-push-16803

For the past week, many Google users have become concerned with a change in the site's search rankings, feeling that the site is now too concerned with big name brands. Google employee Matt Cutts was quick to reply that this was a minor change and he believes the majority of Google users would not even have noticed. He maintains that the change is simply a new way to live up to the tool's mission of returning high quality results as quickly as possible. According to Cutts, the biggest factor is which pages they can trust to have quality material.

1. Have you noticed the change, and if so what did you think about it at the time?

2. How much do you think Cutts' statement is motivated by public relations, as opposed to simply giving the facts?

3. Does this change how you view Google's usefulness?
Saturday, February 28th, 2009
8:55 pm
[aleonhardt]
LeapFish Easily Swims Through Multiple Search Ponds
TechNewsWorld, 04 February 2009, by Jack Germain

http://www.technewsworld.com/story/search-tech/66044.html

This article reviews a new search engine launched in January by a California-based software company.  The site, leapfish.com, offers a "multi-dimensional search aggregator that combines several features to provide more focused results" (Germain, 2009).  By entering a search query, users can view the results of Google, MSN, and Yahoo!, as well as links to relevant videos, images, news articles, blogs, etc. 

A few things make this search tool unique.  The first is that users do not have to click a search button to retrieve results.  Instead, results appear in real time as you type.  A drop-down box (similar to Google's) also appears as you type, giving suggested queries.  Second, unlike Google, all types of results appear on the same page.  The web results from the selected engine are located on the left-hand side of the page; on the right, results in different formats (videos, blogs, Q&A, etc.) appear, and the user can view a thumbnail image by mousing over them.  The user is not required to select the format they want their results in, as with Google.  Third is the speed at which the site does all this.  Other search tools have attempted to aggregate information in a similar fashion to LeapFish; however, most are relatively slow at retrieval.
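Showing several engines' results on one page amounts to fanning the query out in parallel and collecting the answers as they arrive, which is also why speed is the hard part. A minimal sketch of that fan-out pattern, with stubbed-in fetchers standing in for real engine APIs (the engine functions here are placeholders, not LeapFish's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder fetchers standing in for real search-engine APIs.
def search_google(q): return [f"google: {q} #1", f"google: {q} #2"]
def search_yahoo(q):  return [f"yahoo: {q} #1"]
def search_msn(q):    return [f"msn: {q} #1"]

ENGINES = {"google": search_google, "yahoo": search_yahoo, "msn": search_msn}

def aggregate(query):
    """Query every engine concurrently and return results grouped
    by source, roughly how a multi-engine aggregator might."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in ENGINES.items()}
        return {name: f.result() for name, f in futures.items()}

results = aggregate("web archiving")
for engine, hits in results.items():
    print(engine, hits)
```

Because the slowest engine gates the whole page, real aggregators typically render each source's panel as soon as its answer lands rather than waiting for all of them, which matches the "results appear as you type" behavior described above.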

According to the article, the intention of the site is not to compete with Google or Yahoo!; instead, it wants to increase accessibility to the information available online and present it in a well-organized, easy-to-use interface.  It almost sounds too perfect, and so I pose these questions to you:

1)  Though LeapFish claims to not want to compete with major search engines, do you think it poses a threat to them?  Why or why not?

2)  Will a search tool like LeapFish increase awareness of the volume of information available on the Web or will it turn people into lazier searchers?

3)  Check out the site.  What is your opinion of it?  Do you think it will change the way general Web search tools operate in the future? 

