How to improve the accuracy of text optimization with an LLM
Fact: AI is bad at counting, because at its core an LLM runs on probabilistic next-token prediction.
Let's run a little thought experiment that should make the problem immediately clear.
Imagine you are given a task: write a 10,000-word article and work 100 keywords, lemmas, and phrases into it, with no counting tools at all. You have no way to count occurrences or keep a register of keyword insertions. You get the brief, hold the keys in your head, and have to produce everything in one pass. The text will get written and some keys will make it in, but very inaccurately. How could a person without tools insert and track everything correctly in a single pass?
To do it, you would have to number each word as you write and keep a register of key insertions:
today(1) was(2) an(3) excellent(4) chance(5) to(6) buy(7) smartphone(8) in(9) our(10) store(11) — register: «buy smartphone» inserted at word 7
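In code terms, the "number each word, keep a register" idea from the thought experiment might look like this (a minimal Python sketch; the text and key phrase are just examples):

```python
# Minimal sketch: number every word as it is "written" and record
# the word position where each key occurrence starts.
text = "today was an excellent chance to buy smartphone in our store"
key = "buy smartphone"

words = text.split()
numbered = [f"{w}({i})" for i, w in enumerate(words, start=1)]

key_words = key.split()
register = [
    i + 1
    for i in range(len(words) - len(key_words) + 1)
    if words[i:i + len(key_words)] == key_words
]

print(" ".join(numbered))
print(f"key '{key}' found at word position(s): {register}")
```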
By this point you have probably realized: you have to give the LLM some mechanism like this before it starts counting with any real accuracy.
My experience
I have managed to increase the accuracy of SEO text optimization with an LLM, especially when many phrases need to be inserted for deep optimization. In my experience, the success rate of an optimization task goes from 40-50% if you simply ask the AI to insert the phrases, to 70-80% if you ask it to tag the keys in the text.
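As an illustration, suppose you ask the model to wrap every inserted phrase in a tag such as `[K1]...[/K1]` (the tag format here is my own hypothetical choice, not something from the post). Verifying coverage then becomes a simple extraction:

```python
import re

# Hypothetical tag format: [K<n>]phrase[/K<n>] marks the n-th required key.
tagged_text = (
    "Visit our shop today: [K1]buy smartphone[/K1] online, and "
    "[K2]free delivery[/K2] is included when you [K1]buy smartphone[/K1]."
)
required_keys = {1: "buy smartphone", 2: "free delivery", 3: "phone warranty"}

# Collect every tagged occurrence, keyed by its number.
found = {}
for num, phrase in re.findall(r"\[K(\d+)\](.*?)\[/K\1\]", tagged_text):
    found.setdefault(int(num), []).append(phrase)

for num, phrase in required_keys.items():
    count = len(found.get(num, []))
    status = "OK" if count > 0 else "MISSING"
    print(f"K{num} '{phrase}': {count} occurrence(s), {status}")
```

Tagging gives the model something concrete to keep track of, and gives you something trivially checkable afterwards.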
And at the end, be sure to ask the LLM to self-check the occurrences and write a mini report.
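You can also run the same check yourself, off-model. A minimal sketch (the text and phrases are invented for the example):

```python
# Independent self-check: count each required phrase in the final text
# and produce a mini report of what is present and what is missing.
final_text = (
    "Today is a great day to buy smartphone online. "
    "If you buy smartphone from us, free delivery is included."
)
phrases = ["buy smartphone", "free delivery", "phone warranty"]

report = {p: final_text.lower().count(p.lower()) for p in phrases}
for phrase, count in report.items():
    print(f"'{phrase}': {count}")

missing = [p for p, c in report.items() if c == 0]
print(f"missing: {missing or 'none'}")
```

Note that plain substring counting ignores morphology, so for lemmas you would need a stemmer rather than `str.count`.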
For counting characters or words, the best approach is still «Program-of-Thought», where the AI writes code as part of its reasoning in order to count things.
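Program-of-Thought here just means the model emits throwaway code like the following and reads off the result, instead of counting in prose:

```python
# The kind of disposable counting code a Program-of-Thought model emits.
text = "Buy a smartphone in our store today."

char_count = len(text)                              # characters, spaces included
char_count_no_spaces = len(text.replace(" ", ""))   # characters without spaces
word_count = len(text.split())                      # whitespace-separated words

print(char_count, char_count_no_spaces, word_count)  # → 36 30 7
```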
But if you need to generate something in large volumes with «non-thinking» models, the approach described in this post can raise the quality.
P.S. LSI is a naming convention: it is the Xerox and Pampers of the SEO world, a genericized term. I've already come to terms with it myself. We still need to popularize the new term somehow, like «qbst phrases».