Tuesday, August 21, 2012

Find Who Is Invisible or Has Blocked You on Google Talk

Steps to find out who is invisible or has blocked you on Gtalk:
1) Download & Install Pidgin
Click here to download the Pidgin chat client. If you already have Pidgin installed, you may skip this step.
2) Configure Pidgin for Gtalk
You'll probably start with the screen below. Click the Add button ("Accounts -> Manage Accounts" will also bring you to the same screen). Let's add Gtalk to Pidgin.


Pidgin welcome screen
Clicking Add will let you add a new Gtalk account. The following two screenshots show what you need to fill in on the Basic and Advanced tabs.
Pidgin add account basic tab

Pidgin add account advanced tab
With all the settings properly entered, you should be able to connect to Gtalk and load your contacts successfully.
3) Find Who’s Blocking You
When someone has blocked you on Gtalk (and on other IMs), they appear offline, just like contacts who really are offline. Right-click the contact, click Get Info, and we'll see how to tell the two apart.

Gtalk get info
The following image compares two contacts: actually offline (left) and blocked (right). If you are blocked, nothing is displayed under Buddy Information.
Gtalk buddy information
That's all. Now you can easily tell who is really offline and who is blocking you on Google Talk.

Running Multiple Instances of Google Talk

Google Talk (GTalk) users can let GTalk go polygamous, that is, run multiple instances of Google Talk and log in to multiple Google accounts at the same time. This trick needs no crack, patch, or hack: just a simple command-line switch, /nomutex, appended to the Google Talk shortcut.

Running multiple Google Talk instances is useful if you have several Google Talk accounts (or Google/Gmail accounts used to log in to GTalk), or several profiles or personalities, and don't want to log off one account and on to another every time you switch, or simply want to be logged in to all accounts at once on the same computer.

You can add the /nomutex switch to the existing Google Talk shortcut, or create a new shortcut with the /nomutex command-line parameter.

To edit existing Google Talk shortcut:

1) Right click on the Google Talk shortcut.
2) On the right click contextual menu, click on Properties.
3) Go to Shortcut tab on Google Talk Properties window.
4) In the Target text box, add /nomutex to the end of the line so that it looks like the line below (or simply copy and paste the syntax below over the original).

Target: "C:\Program Files\Google\Google Talk\googletalk.exe" /nomutex

5) Click on OK.


To create a new shortcut for Google Talk:

1) Right-click on the desktop or anywhere you want to place the GTalk shortcut.
2) Select New on the right click context menu.
3) Then select Shortcut.
4) Copy and paste the following line to the text box when prompted to type the location of the item:

"C:\Program Files\Google\Google Talk\googletalk.exe" /nomutex

5) Click on Next.
6) Give the shortcut a proper name such as Google Talk or Google Talk Multiple or Google Talk Polygamy.
7) Click OK until you are done.

If you have a hex editor, you can modify the Google Talk binary itself so that it always allows multiple instances of GTalk to launch, whether or not the /nomutex switch is specified.

Launch the hex editor, open googletalk.exe, and search for the following pattern:

004536FD . 3BC6     CMP EAX,ESI                   ; compare the result of the single-instance (mutex) check
004536FF . 75 05    JNZ SHORT googleta.00453706   ; conditional jump, taken only when the check lets the launch continue

Modify the bytes to look like the following:

004536FD . 8BC1     MOV EAX,ECX                   ; filler instruction with no effect on the control flow
004536FF . EB 05    JMP SHORT googleta.00453706   ; unconditional jump, so the launch path is always taken


How This Works
"Mutex" is short for mutual exclusion object. A mutex is a program object that allows multiple program threads to share the same resource, but not simultaneously; Google Talk uses one to make sure only a single copy of itself runs at a time.

So, in the trick above, the /nomutex ("no mutex") switch makes Google Talk skip that check, letting several instances run and use the same resources at the same time.
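To make the idea concrete, here is a minimal PHP sketch of the same single-instance pattern, using a lock file in place of a Windows mutex (an analogy only, not Google Talk's actual code; the lock-file name is made up). The first copy to grab the lock runs; a second copy sees the lock is taken and quits, which is exactly the check that /nomutex skips:

<?php
// Single-instance guard, analogous to Google Talk's named mutex.
// The lock-file location is arbitrary; any writable path works.
$lock = fopen(sys_get_temp_dir() . '/single-instance-demo.lock', 'c');

if (!flock($lock, LOCK_EX | LOCK_NB)) {
    // Another copy already holds the lock -- this is the check
    // that the /nomutex switch tells Google Talk to skip.
    die("Another instance is already running.\n");
}

echo "Running as the only instance.\n";
sleep(10); // pretend to do application work while holding the lock

flock($lock, LOCK_UN);
fclose($lock);
?>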

Creating IM Bot, How To Create IM Bot

This quick tutorial will show you how to develop your own functional IM bot that works with Google Talk, Yahoo! Messenger, Windows Live, and all other popular instant messaging clients.
To get started, all you need to know are some very basic programming skills (any language would do) and web space to host your “bot”.
For this example, I have created a dummy bot called “insecure” that listens to your IM messages. To see this live, add insecure@bot.im to your GTalk buddy list and start chatting.


IM Bot

If you'd like to write a personal IM bot, just follow these simple steps:
Step 1: Go to www.imified.com and register a new account for your bot.
Step 2: Now create the bot itself, which is actually a simple script that resides on your public web server. It could be in PHP, Perl, Python, or any other language.
Example Hello World bot:
The example below illustrates just how easy it is to create a bot. 
This example is coded in PHP.

<?php
// IMified calls this script each time the user sends a message,
// passing a 'step' counter and earlier answers as 'value1', 'value2', ...
switch ($_REQUEST['step']) {
    case 1:
        echo "Hi, what's your name?";
        break;
    case 2:
        echo "Hi " . $_REQUEST['value1'] . ", where do you live?";
        break;
    case 3:
        echo "Well, welcome to this hello world bot, " . $_REQUEST['value1'] .
             " from " . $_REQUEST['value2'] . ".";
        break;
}
?>
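You can sanity-check the script by loading it in a browser with the same parameters the switch statement reads, e.g. http://www.insecure.in/imbot.php?step=2&value1=Alice (the example URL used in Step 4 below); it should print "Hi Alice, where do you live?".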
Step 3: Once your script is ready, put it somewhere on your web server and copy the full URL to the clipboard.
Step 4: Now log in to your IMified account and paste the script URL into your bot's settings:

Screen Name: insecure@bot.im
Bot Script URL: http://www.insecure.in/imbot.php

Step 5: Add that IM bot to your friends list. That's it.
This is a very basic bot but the possibilities are endless.
For instance, you could write a bot that sends an email to all your close friends via a simple IM message. Or you could write one that does currency conversion.

Monday, August 20, 2012

What is Google Analytics






Optimizing a website for search engines is one thing, but how do you know whether all that hard work is paying off? Google Analytics is a free service for webmasters that tracks visits in multiple ways. Today, many large organizations, government institutions, companies, and individuals use Google Analytics to track their web traffic.

While webmasters and bloggers may get the most benefit from the deep statistics Google Analytics provides, people who manage personal or business websites and blogs can also use these tools to better understand what drives visitors to their sites.





Dashboard:

There are plenty of reports you can view in Google Analytics, but those take longer and more in-depth coverage to learn. For now, get familiar with your Dashboard. This is the main page, the starting point, for your Google Analytics data. Here you'll see a 30-day graph of total visits; the time period shown is customizable.


You'll see a snapshot of Site Usage, including statistics for Visits, Pageviews, Bounce Rate, Average Time on Site, and Percentage of New Visits. Further down the page there is a map that highlights where your visitors are coming from, as well as a pie chart showing how visitors are reaching your site: from search engine results, from referring sites, or from direct traffic. This data can quickly clue you in to your strengths and weaknesses.

On the Dashboard you'll also be able to see your top content. Adding fresh content in the same vein as your top-performing content gives repeat visitors new material and will likely draw more unique visitors to your site.



Knowing about Visitors:

It's important to know who is visiting your site. Are your visitors first-time viewers? How long are they staying? How many pages within your site are they visiting? How many unique visitors do you have? With Google Analytics, you can see a detailed breakdown of all of this important data.



If you learn that most people stay on your site for a very short time, try to find out why. Is your design pleasing and simple, or cluttered and hard to read? If you find that the majority of your visitors have been to your site before, you know you're doing something right. Having repeat visitors is just as important as bringing in new, unique traffic.

To evaluate whether the SEO steps you've taken are working (although that's a hard thing to measure in just a few clicks!), try downloading a Firefox plug-in to enhance Google Analytics. There's one from Juice Analytics that shows your new referring sites and your keywords with 50% higher traffic over the past week.



This doesn't even cover the tip of the iceberg that is Google Analytics. To really be well versed in GA, you'll need to spend time clicking around, browsing forums, and viewing different reports. Different data will be useful to different users, so learning what you can do in GA will open your eyes to a whole new way of tracking your web traffic.

How to Get a Short URL for Your Google+ Account

Already using Google+? Follow Softwares & Tips for the latest on the platform's new features, tips and tricks, as well as social media and technology updates.




Unlike Facebook and Twitter, Google+ does not offer vanity URLs for user profiles, making it more difficult to share your Google+ Profile with your friends. Instead, Google uses a long string of numbers to identify users (e.g. 103147437714360421464).

The reason Google+ does not use vanity URLs is that they could let spammers figure out the email addresses of millions of Google+ users (since many Google Accounts are linked to Gmail accounts).

Of course, this leads to a problem: you don't want to be telling people to type in a long string of numbers to find your Google+ Profile. That's where Gplus.to comes in. This simple little app lets you create a short URL for your Google+ page, making it easy to share with your friends. Mine, for example, is http://gplus.to/PcSofttips, which is much easier to remember than a random string of numbers.

Gplus.to fills a gap that Google+ doesn’t currently address. Should Google give users the ability to create vanity URLs, or is it too much of a privacy concern because of its connection to Gmail? Feel free to let us know in the comments.

How to view zip / rar files online using Google Docs

You've probably received zip files from friends and colleagues before and wondered "who's going to download and open this just to see what's inside?", then ignored the mail out of laziness. Or you've found a zip file online and wanted to see its contents before downloading. Many people do that, and I've done it many times myself.



Here's a solution: Google Docs has added a new feature that lets you view the contents of a zip file online. This means there's no need to download it; you can view it online just like a PDF, a doc file, or a slideshow. If the files inside the archive are common types such as .jpg, .doc, or .pdf, you can go through the whole compressed file without downloading it at all.

That works from Gmail, of course. If you want to see the contents of a rar/zip file found online, just paste the file's URL into the Google Docs Viewer and it will do the rest.
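The viewer takes the file's address as a url query parameter, so the link you build looks something like this (the archive address here is made up for illustration):

https://docs.google.com/viewer?url=http%3A%2F%2Fexample.com%2Ffiles%2Farchive.zip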

Google Page Rank

Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it?

Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one), so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb.

Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages, and roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list.

One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias.

Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.


How to tell who's important


If you've ever created a web page, you've probably included links to other pages that contain valuable, reliable information. By doing so, you are affirming the importance of the pages you link to. Google's PageRank algorithm stages a monthly popularity contest among all pages on the web to decide which pages are most important. The fundamental idea put forth by PageRank's creators, Sergey Brin and Lawrence Page, is this: the importance of a page is judged by the number of pages linking to it as well as their importance.

We will assign to each web page P a measure of its importance I(P), called the page's PageRank. At various sites, you may find an approximation of a page's PageRank. (For instance, the home page of The American Mathematical Society currently has a PageRank of 8 on a scale of 10. Can you find any pages with a PageRank of 10?) This reported value is only an approximation, since Google declines to publish actual PageRanks in an effort to frustrate those who would manipulate the rankings.

Here's how the PageRank is determined. Suppose that page $P_j$ has $l_j$ links. If one of those links is to page $P_i$, then $P_j$ will pass on $1/l_j$ of its importance to $P_i$. The importance ranking of $P_i$ is then the sum of all the contributions made by pages linking to it. That is, if we denote the set of pages linking to $P_i$ by $B_i$, then

\[ I(P_i) = \sum_{P_j \in B_i} \frac{I(P_j)}{l_j} \]

For example, if $P_i$ is linked to only by $P_1$, which has three links, and $P_2$, which has two, then $I(P_i) = I(P_1)/3 + I(P_2)/2$.

This may remind you of the chicken and the egg: to determine the importance of a page, we first need to know the importance of all the pages linking to it. However, we may recast the problem into one that is more mathematically familiar.

Let's first create a matrix, called the hyperlink matrix, ${\bf H} = [H_{ij}]$, in which the entry in the $i$th row and $j$th column is

\[ H_{ij} = \begin{cases} 1/l_j & \text{if } P_j \in B_i \\ 0 & \text{otherwise} \end{cases} \]

Notice that H has some special properties. First, its entries are all nonnegative. Also, the sum of the entries in a column is one unless the page corresponding to that column has no links. Matrices in which all the entries are nonnegative and the sum of the entries in every column is one are called stochastic; they will play an important role in our story.

We will also form a vector $I = [I(P_i)]$ whose components are the PageRanks, that is, the importance rankings, of all the pages. The condition above defining the PageRank may be expressed as

\[ I = {\bf H} I \]

In other words, the vector I is an eigenvector of the matrix H with eigenvalue 1. We also call this a stationary vector of H.

Let's look at an example. Shown below is a representation of a small collection of eight web pages, with links represented by arrows.

[Figure: an example web of eight pages, with links drawn as arrows]

The corresponding matrix is






[Figures: the 8×8 hyperlink matrix H for this web, and its stationary vector I]
This shows that page 8 wins the popularity contest. Here is the same figure with the web pages shaded so that pages with higher PageRanks are lighter.

[Figure: the eight-page web, shaded by PageRank]


Computing I


There are many ways to find the eigenvectors of a square matrix. However, we are in for a special challenge, since the matrix H has one column for each web page indexed by Google. This means that H has about n = 25 billion columns and rows. However, most of the entries in H are zero; in fact, studies show that web pages have an average of about 10 links, meaning that, on average, all but 10 entries in every column are zero.

We will choose a method known as the power method for finding the stationary vector I of the matrix H. How does the power method work? We begin by choosing a vector $I^0$ as a candidate for I and then producing a sequence of vectors $I^k$ by

\[ I^{k+1} = {\bf H} I^k \]

The method is founded on the following general principle, which we will soon investigate.




General principle: The sequence I^k will converge to the stationary vector I.
We will illustrate with the example above.

I^0     I^1     I^2     I^3     I^4     ...     I^60    I^61
1       0       0       0       0.0278  ...     0.06    0.06
0       0.5     0.25    0.1667  0.0833  ...     0.0675  0.0675
0       0.5     0       0       0       ...     0.03    0.03
0       0       0.5     0.25    0.1667  ...     0.0675  0.0675
0       0       0.25    0.1667  0.1111  ...     0.0975  0.0975
0       0       0       0.25    0.1806  ...     0.2025  0.2025
0       0       0       0.0833  0.0972  ...     0.18    0.18
0       0       0       0.0833  0.3333  ...     0.295   0.295
It is natural to ask what these numbers mean. Of course, there can be no absolute measure of a page's importance, only relative measures for comparing the importance of two pages through statements such as "Page A is twice as important as Page B." For this reason, we may multiply all the importance rankings by some fixed quantity without affecting the information they tell us. In this way, we will always assume, for reasons to be explained shortly, that the sum of all the popularities is one.
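To make the procedure concrete, here is a minimal sketch of the power method in PHP (the same language as the bot example earlier). The three-page web in it is invented purely for illustration; it is not the eight-page example from the figures:

<?php
// Power method: repeatedly apply the column-stochastic matrix S
// to a starting vector; the result converges to the stationary
// vector I satisfying I = S*I.
function powerMethod(array $S, array $I, int $steps = 100): array
{
    $n = count($I);
    for ($k = 0; $k < $steps; $k++) {
        $next = array_fill(0, $n, 0.0);
        for ($i = 0; $i < $n; $i++) {
            for ($j = 0; $j < $n; $j++) {
                $next[$i] += $S[$i][$j] * $I[$j];
            }
        }
        $I = $next;
    }
    return $I;
}

// Made-up web: page 1 links to pages 2 and 3, page 2 links to
// page 3, and page 3 links to page 1. Column j holds 1/l_j.
$S = [
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 1.0, 0.0],
];

print_r(powerMethod($S, [1, 0, 0])); // converges to [0.4, 0.2, 0.4]
?>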


Three important questions


Three questions naturally come to mind:



  • Does the sequence I^k always converge?

  • Is the vector to which it converges independent of the initial vector I^0?

  • Do the importance rankings contain the information that we want?

Given the current method, the answer to all three questions is "No!" However, we'll see how to modify our method so that we can answer "yes" to all three. Let's first look at a very simple example. Consider the following small web consisting of two web pages, one of which links to the other:






[Figure: a two-page web in which page 1 links to page 2, together with its matrix H]
Here is one way in which our algorithm could proceed:

I^0     I^1     I^2     I^3 = I
1       0       0       0
0       1       0       0
In this case, the importance rating of both pages is zero, which tells us nothing about the relative importance of these pages. The problem is that P2 has no links. Consequently, it takes some of the importance from page P1 in each iterative step but does not pass it on to any other page. This has the effect of draining all the importance from the web. Pages with no links are called dangling nodes, and there are, of course, many of them in the real web we want to study. We'll see how to deal with them in a minute, but first let's consider a new way of thinking about the matrix H and stationary vector I.


A probabilistic interpretation of H


Imagine that we surf the web at random; that is, when we find ourselves on a web page, we randomly follow one of its links to another page after one second. For instance, if we are on page $P_j$ with $l_j$ links, one of which takes us to page $P_i$, the probability that we next end up on page $P_i$ is $1/l_j$.

As we surf randomly, we will denote by $T_j$ the fraction of time that we spend on page $P_j$. Then the fraction of the time that we end up on page $P_i$ coming from $P_j$ is $T_j/l_j$. If we end up on $P_i$, we must have come from a page linking to it. This means that

\[ T_i = \sum_{P_j \in B_i} \frac{T_j}{l_j} \]

where the sum is over all the pages $P_j$ linking to $P_i$. Notice that this is the same equation defining the PageRank rankings, and so $I(P_i) = T_i$. This allows us to interpret a web page's PageRank as the fraction of time that a random surfer spends on that web page. This may make sense if you have ever surfed around for information about a topic you were unfamiliar with: if you follow links for a while, you find yourself coming back to some pages more often than others. Just as "all roads lead to Rome," these are typically the more important pages. Notice that, given this interpretation, it is natural to require that the sum of the entries in the PageRank vector I be one.

Of course, there is a complication in this description: if we surf randomly, at some point we will surely get stuck at a dangling node, a page with no links. To keep going, we will choose the next page at random; that is, we pretend that a dangling node has a link to every other page. This has the effect of modifying the hyperlink matrix H by replacing the column of zeroes corresponding to a dangling node with a column in which each entry is 1/n. We call this new matrix S. In our previous example, we now have








[Figures: the two-page web again, its matrix S = [[0, 1/2], [1, 1/2]], and its eigenvector I = [1/3, 2/3]]
In other words, page P2 has twice the importance of page P1, which may feel about right to you. The matrix S has the pleasant property that the entries are nonnegative and the sum of the entries in each column is one. In other words, it is stochastic. Stochastic matrices have several properties that will prove useful to us. For instance, stochastic matrices always have stationary vectors. For later purposes, we will note that S is obtained from H in a simple way. If A is the matrix whose entries are all zero except for the columns corresponding to dangling nodes, in which each entry is 1/n, then S = H + A.
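In code, that adjustment is only a few lines. Here is a PHP sketch (the function name is mine; it simply implements the S = H + A fix described above):

<?php
// Replace each all-zero column of H (a dangling node) with a
// column of 1/n entries, producing the stochastic matrix S = H + A.
function fixDangling(array $H): array
{
    $n = count($H);
    for ($j = 0; $j < $n; $j++) {
        $colSum = 0.0;
        for ($i = 0; $i < $n; $i++) {
            $colSum += $H[$i][$j];
        }
        if ($colSum == 0.0) {                 // page j has no links
            for ($i = 0; $i < $n; $i++) {
                $H[$i][$j] = 1.0 / $n;
            }
        }
    }
    return $H;
}

// The two-page example: page 1 links to page 2, page 2 is dangling.
print_r(fixDangling([[0, 0], [1, 0]]));  // second column becomes [0.5, 0.5]
?>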


How does the power method work?


In general, the power method is a technique for finding an eigenvector of a square matrix corresponding to the eigenvalue with the largest magnitude. In our case, we are looking for an eigenvector of S corresponding to the eigenvalue 1. Under the best of circumstances, to be described soon, the other eigenvalues of S will have a magnitude smaller than one; that is, $|\lambda| < 1$ if $\lambda$ is an eigenvalue of S other than 1. We will assume that the eigenvalues of S are $\lambda_1, \lambda_2, \ldots, \lambda_n$ and that

\[ 1 = \lambda_1 > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \]

We will also assume that there is a basis $v_j$ of eigenvectors for S with corresponding eigenvalues $\lambda_j$. This assumption is not necessarily true, but with it we may more easily illustrate how the power method works. We may write our initial vector $I^0$ as

\[ I^0 = c_1 v_1 + c_2 v_2 + \ldots + c_n v_n \]

Then

\begin{eqnarray*}
I^1 = {\bf S} I^0 &=& c_1 v_1 + c_2 \lambda_2 v_2 + \ldots + c_n \lambda_n v_n \\
I^2 = {\bf S} I^1 &=& c_1 v_1 + c_2 \lambda_2^2 v_2 + \ldots + c_n \lambda_n^2 v_n \\
\vdots & & \vdots \\
I^k = {\bf S} I^{k-1} &=& c_1 v_1 + c_2 \lambda_2^k v_2 + \ldots + c_n \lambda_n^k v_n
\end{eqnarray*}

Since the eigenvalues $\lambda_j$ with $j \geq 2$ have magnitude smaller than one, it follows that $\lambda_j^k \to 0$ if $j \geq 2$, and therefore $I^k \to I = c_1 v_1$, an eigenvector corresponding to the eigenvalue 1.

It is important to note here that the rate at which $I^k \to I$ is determined by $|\lambda_2|$. When $|\lambda_2|$ is relatively close to 0, then $\lambda_2^k \to 0$ relatively quickly. For instance, consider the matrix

\[ {\bf S} = \left[\begin{array}{cc} 0.65 & 0.35 \\ 0.35 & 0.65 \end{array}\right] \]

The eigenvalues of this matrix are $\lambda_1 = 1$ and $\lambda_2 = 0.3$.

[Figure: the vectors I^k (red) converging quickly to the stationary vector I (green)]

Now consider the matrix

\[ {\bf S} = \left[\begin{array}{cc} 0.85 & 0.15 \\ 0.15 & 0.85 \end{array}\right] \]

Here the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 0.7$. Notice how the vectors $I^k$ converge more slowly to the stationary vector I in this example, in which the second eigenvalue has a larger magnitude.

[Figure: the vectors I^k converging slowly to the stationary vector I]
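To see the difference numerically, you can feed these two matrices to the powerMethod sketch from the "Computing I" section above (this snippet assumes that function is in scope):

<?php
// Convergence speed tracks |lambda_2|: 0.3 fades much faster than 0.7.
$fast = [[0.65, 0.35], [0.35, 0.65]];     // lambda_2 = 0.3
$slow = [[0.85, 0.15], [0.15, 0.85]];     // lambda_2 = 0.7
print_r(powerMethod($fast, [1, 0], 10));  // ~[0.5, 0.5] after 10 steps
print_r(powerMethod($slow, [1, 0], 10));  // ~[0.514, 0.486], still drifting
?>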


When things go wrong


In our discussion above, we assumed that the matrix S had the property that $\lambda_1 = 1$ and $|\lambda_2| < 1$. This does not always happen, however, for the matrices S that we might find. Suppose that our web looks like this:

[Figure: a five-page web whose links form a single cycle, page 1 → 2 → 3 → 4 → 5 → 1]

In this case, the matrix S is

[Figure: the 5×5 matrix S for this cyclic web]

Then we see

I^0     I^1     I^2     I^3     I^4     I^5
1       0       0       0       0       1
0       1       0       0       0       0
0       0       1       0       0       0
0       0       0       1       0       0
0       0       0       0       1       0
In this case, the sequence of vectors I^k fails to converge. Why is this? The second eigenvalue of the matrix S satisfies $|\lambda_2| = 1$, and so the argument we gave to justify the power method no longer holds. To guarantee that $|\lambda_2| < 1$, we need the matrix S to be primitive. This means that, for some m, S^m has all positive entries; in other words, given any two pages, it is possible to get from the first page to the second by following exactly m links. Clearly, our most recent example does not satisfy this property. In a moment, we will see how to modify our matrix S to obtain a primitive, stochastic matrix, which therefore satisfies $|\lambda_2| < 1$.

Here is another example. Suppose that our web looks like this:

[Figure: another example web, in which links lead into a group of pages but no links come back out]

In this case, the matrix S is






[Figures: the matrix S for this web, and its stationary vector]
Notice that the PageRanks assigned to the first four web pages are zero. However, this doesn't feel right: each of these pages has links coming into it from other pages. Clearly, somebody likes these pages! Generally speaking, we want the importance rankings of all pages to be positive. The problem with this example is that it contains a smaller web within it, shown in the blue box below.

[Figure: the same web with the pages of the inner "importance sink" outlined in a blue box]

Links come into this box, but none go out. Just as in the example of the dangling node we discussed above, these pages form an "importance sink" that drains the importance out of the other four pages. This happens when the matrix S is reducible; that is, when S can be written in block form as

\[ S = \left[\begin{array}{cc} * & 0 \\ * & * \end{array}\right] \]

Indeed, if the matrix S is irreducible, we can guarantee that there is a stationary vector with all positive entries. A web is called strongly connected if, given any two pages, there is a way to follow links from the first page to the second. Clearly, our most recent example is not strongly connected. However, strongly connected webs provide irreducible matrices S.

To summarize, the matrix S is stochastic, which implies that it has a stationary vector. However, we also need S to be (a) primitive, so that $|\lambda_2| < 1$, and (b) irreducible, so that the stationary vector has all positive entries.

A final modification

To find a new matrix that is both primitive and irreducible, we will modify the way our random surfer moves through the web. As it stands now, the movement of our random surfer is determined by S: either he follows one of the links on his current page or, at a page with no links, randomly chooses any other page to move to. To make our modification, we first choose a parameter $\alpha$ between 0 and 1. Now suppose that our random surfer moves in a slightly different way: with probability $\alpha$, he is guided by S; with probability $1 - \alpha$, he chooses the next page at random. If we denote by ${\bf 1}$ the $n \times n$ matrix whose entries are all one, we obtain the Google matrix

\[ {\bf G} = \alpha {\bf S} + (1 - \alpha) \frac{1}{n} {\bf 1} \]

Notice now that G is stochastic, as it is a combination of stochastic matrices. Furthermore, all the entries of G are positive, which implies that G is both primitive and irreducible. Therefore, G has a unique stationary vector I that may be found using the power method.

The role of the parameter $\alpha$ is an important one. Notice that if $\alpha = 1$, then G = S, meaning that we are working with the original hyperlink structure of the web. However, if $\alpha = 0$, then ${\bf G} = \frac{1}{n}{\bf 1}$; in other words, the web we are considering has a link between any two pages, and we have lost the original hyperlink structure of the web. Clearly, we would like to take $\alpha$ close to 1 so that the hyperlink structure of the web is weighted heavily in the computation. However, there is another consideration. Remember that the rate of convergence of the power method is governed by the magnitude of the second eigenvalue $|\lambda_2|$. For the Google matrix, it has been proven that the magnitude of the second eigenvalue is $|\lambda_2| = \alpha$. This means that when $\alpha$ is close to 1, the convergence of the power method will be very slow. As a compromise between these two competing interests, Sergey Brin and Larry Page, the creators of PageRank, chose $\alpha = 0.85$.
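Here is how one power-method step with G might look in PHP, continuing the sketches above (the function name is mine; this is the dense, textbook version, fine for toy examples but hopeless for 25 billion pages):

<?php
// One step I -> G*I, where G = a*S + (1-a)/n * 1 (the all-ones matrix).
// a = 0.85 is the damping factor Brin and Page chose.
function googleStep(array $S, array $I, float $a = 0.85): array
{
    $n = count($I);
    $teleport = (1 - $a) / $n;          // each entry of (1-a)/n * 1
    $next = array_fill(0, $n, 0.0);
    for ($i = 0; $i < $n; $i++) {
        for ($j = 0; $j < $n; $j++) {
            // G[i][j] = a*S[i][j] + (1-a)/n, applied entry by entry
            $next[$i] += ($a * $S[$i][$j] + $teleport) * $I[$j];
        }
    }
    return $next;
}

// Iterating googleStep from any starting vector that sums to one
// converges to the unique stationary vector of G.
?>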


Computing I


What we've described so far looks like a good theory, but remember that we need to apply it to $n \times n$ matrices where n is about 25 billion! In fact, the power method is especially well suited to this situation. Remember that the stochastic matrix S may be written as

\[ {\bf S} = {\bf H} + {\bf A} \]

and therefore the Google matrix has the form

\[ {\bf G} = \alpha {\bf H} + \alpha {\bf A} + \frac{1-\alpha}{n} {\bf 1} \]

Therefore,

\[ {\bf G} I^k = \alpha {\bf H} I^k + \alpha {\bf A} I^k + \frac{1-\alpha}{n} {\bf 1} I^k \]

Now recall that most of the entries in H are zero; on average, only ten entries per column are nonzero. Therefore, evaluating ${\bf H} I^k$ requires only about ten nonzero terms for each entry in the resulting vector. Also, the rows of A are all identical, as are the rows of 1. Therefore, evaluating ${\bf A} I^k$ and ${\bf 1} I^k$ amounts to adding up the current importance rankings of the dangling nodes, or of all web pages, which only needs to be done once per iteration.

With the value of $\alpha$ chosen to be near 0.85, Brin and Page report that 50 to 100 iterations are required to obtain a sufficiently good approximation to I. The calculation is reported to take a few days to complete.

Of course, the web is continually changing. First, the content of web pages, especially for news organizations, may change frequently. In addition, the underlying hyperlink structure of the web changes as pages and links are added or removed. It is rumored that Google recomputes the PageRank vector I roughly every month. Since the PageRank of pages can be observed to fluctuate considerably during this time, it is known to some as the Google Dance. (In 2002, Google held a Google Dance!)
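In code, the same bookkeeping might look like this (a sketch with names of my own choosing; it mirrors the three terms above, touching only each page's actual links instead of an n × n matrix):

<?php
// Sparse PageRank step: G*I = a*H*I + a*A*I + ((1-a)/n)*1*I.
// $links[$j] is the list of pages that page $j links to (0-indexed).
function sparseStep(array $links, array $I, float $a = 0.85): array
{
    $n = count($I);
    $next = array_fill(0, $n, 0.0);
    $danglingMass = 0.0;

    foreach ($links as $j => $outs) {
        if (count($outs) == 0) {
            $danglingMass += $I[$j];          // the a*A*I contribution
        } else {
            $share = $I[$j] / count($outs);   // the a*H*I contribution:
            foreach ($outs as $i) {           // only ~10 terms per page
                $next[$i] += $a * $share;
            }
        }
    }

    // Dangling mass and teleportation are spread evenly over all pages.
    $base = ($a * $danglingMass + (1 - $a) * array_sum($I)) / $n;
    for ($i = 0; $i < $n; $i++) {
        $next[$i] += $base;
    }
    return $next;
}
?>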


Summary


Brin and Page introduced Google in 1998, a time when the pace at which the web was growing began to outstrip the ability of existing search engines to yield usable results. At that time, most search engines had been developed by businesses that were not interested in publishing the details of how their products worked. In developing Google, Brin and Page wanted to "push more development and understanding into the academic realm." That is, they hoped, first of all, to improve the design of search engines by moving it into a more open, academic environment. In addition, they felt that the usage statistics for their search engine would provide an interesting data set for research. It appears that the federal government, which recently tried to gain some of Google's statistics, feels the same way.

There are other algorithms that use the hyperlink structure of the web to rank the importance of web pages. One notable example is the HITS algorithm, produced by Jon Kleinberg, which forms the basis of the Teoma search engine. In fact, it is interesting to compare the results of searches sent to different search engines as a way to understand why some complain of a "Googleopoly."
