(1) Ignore the scores produced by xAIgent. Given a large collection of documents (e.g., web pages), score each keyphrase by the percentage of documents for which xAIgent suggested it. (Example: "The keyphrase 'corporate merger' was generated for 45 of the 100 documents. Thus 'corporate merger' has a score of 45%.")
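This document-frequency scoring can be sketched as follows. The input format (a list of per-document keyphrase lists) is an assumption for illustration; xAIgent's actual output would first need to be collected into this shape.

```python
from collections import Counter

def score_by_document_frequency(keyphrases_per_doc):
    """Score each keyphrase by the percentage of documents
    for which it was suggested."""
    n_docs = len(keyphrases_per_doc)
    counts = Counter()
    for phrases in keyphrases_per_doc:
        counts.update(set(phrases))  # count each document at most once
    return {phrase: 100.0 * c / n_docs for phrase, c in counts.items()}

suggestions = [
    ["corporate merger", "stocks"],
    ["corporate merger", "bonds"],
    ["stocks"],
    ["corporate merger"],
]
scores = score_by_document_frequency(suggestions)
# "corporate merger" was suggested for 3 of 4 documents -> 75.0
```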
(2) Take the scores produced by xAIgent and normalize them so that they range from 0% to 100%, by dividing the score of each keyphrase by the score of the first keyphrase. (The first keyphrase always has the highest score.) (Example: xAIgent suggests three phrases: 'corporate merger' with a score of 50, 'stocks' with a score of 30, and 'bonds' with a score of 10. The normalized scores are 100%, 60%, and 20%, respectively.)
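A minimal sketch of this normalization, assuming the phrases arrive as (phrase, score) pairs ordered highest score first, as described above:

```python
def normalize_scores(scored_phrases):
    """Divide each score by the top score so values range 0-100%.
    Assumes scored_phrases is ordered with the highest score first."""
    top = scored_phrases[0][1]
    return [(phrase, 100.0 * score / top) for phrase, score in scored_phrases]

raw = [("corporate merger", 50), ("stocks", 30), ("bonds", 10)]
normalized = normalize_scores(raw)
# -> [("corporate merger", 100.0), ("stocks", 60.0), ("bonds", 20.0)]
```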
(3) Longer documents often seem to yield better keyphrases than shorter documents, and the problem with suggestion (2) is that it ignores document length. One possibility is to multiply the normalized score of (2) by (say) the logarithm of the length of the document (measured in words or in bytes). Another possibility is to sort the collection by length and boost the scores of keyphrases according to the percentile in which their source document appears. (Example: "The keyphrase 'corporate merger' appears in document #345 and has a normalized score of 60%. Since document #345 is in the top quartile of the collection by length, we add 20 points to the score of 'corporate merger', for an adjusted score of 80%.")

To reduce the size of the list slightly, you might merge keyphrases that share the same stem (e.g., "automobile" and "automobiles"). To reduce the list substantially, assign a normalized score to each keyphrase and keep only the keyphrases with the highest normalized scores.

Doc-Tags is a commercial example of managing collections of documents and their extracted keyphrases.
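The percentile-based boost and the stem-based merging above can be sketched as follows. The 20-point top-quartile boost mirrors the example; the crude suffix-stripping "stemmer" is a stand-in for a real one (e.g., Porter's algorithm).

```python
from bisect import bisect_left

def length_adjusted_score(normalized_score, doc_length, all_lengths,
                          top_quartile_boost=20.0):
    """Add a fixed boost to a keyphrase's normalized score when its
    source document falls in the top length quartile of the collection."""
    ranked = sorted(all_lengths)
    rank = bisect_left(ranked, doc_length)
    percentile = 100.0 * rank / len(ranked)
    if percentile >= 75.0:  # top 25% of documents by length
        return min(normalized_score + top_quartile_boost, 100.0)
    return normalized_score

def dedupe_by_stem(phrases):
    """Merge phrases whose crude stem matches (e.g. 'automobile' and
    'automobiles'); a production system would use a proper stemmer."""
    seen = {}
    for phrase in phrases:
        stem = " ".join(w.rstrip("s") for w in phrase.lower().split())
        seen.setdefault(stem, phrase)  # keep the first surface form seen
    return list(seen.values())

lengths = [100, 200, 5000, 8000]          # document lengths in words
adjusted = length_adjusted_score(60.0, 8000, lengths)   # boosted to 80.0
unchanged = length_adjusted_score(60.0, 100, lengths)   # stays 60.0
merged = dedupe_by_stem(["automobile", "automobiles", "stocks"])
```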